[Binary artifact: POSIX tar archive (owner: core). The compressed contents are not recoverable as text; only the member listing survives.]

Archive members:
- var/home/core/zuul-output/ (directory)
- var/home/core/zuul-output/logs/ (directory)
- var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)
Hyꁤ XesEN@:T,4_&G4s(;&Cnt~8]lpZѶ+ckWS{Sǣ€b,9.x sZD2g-@~5,ӭII$eCD`&Q@"E- %R[krc;aH!"YM"9&r6ȹMXGp"1rbEbH2Dl-KvHk^`g4Ȩ|_=bCgGV+iȆ  uO"X/D~4j>^iTr|Zxml[/=Jyj[\wkrvȾ2~j47 kKDIŌcɜ@ADymJzٞ[ =rx O mHq5Qۏx8= 癝y$S1D29O}`py1 )of5u7gޓ6r$+_fjy0~{1/c42##ܦ(Yl_ER$^-[TFXUʌJ}~SЯ3'ZYPo>>O;}z͜ǖY8>_<>BΎ[LDR& (ɓb# )a&FT0,$*Ϭ'680IlைO:#jb> aݩh%X+0uQC}A-aЧDd4DEYLI)S$-ң֢#W|ѓ&G+BR+Tr6Cyl)ihe&XlZacA"Ƥ@8%Rւ"6y SFf!XD(,F0hQ(,C8AA[@ȩnElSdY^?39#غ U^#FL[&{܎b[)kh wx>_SP!,Pw7dVE5ͿbrDp(V?A(6eQ2MyZl(b D{MFq/βfպ38PO79/i|N qi#}>.I)6'N&JC =7#%lh.Ju;(}O9 =s*9wԏGʋSĆY-=Xi^Ǽz1) MNsPIGg@1kGU*L0-R"(^YX>"BCQV}_/6!Ywyn}Fô:Fx`VxnsOó y "ߊڭ^Td>,꯯`xYO7=%HE[e CvNʚJtIZ2d@j-6a= ?ndrWkIؼL%f}ƿ5]5䓞..[fO/G,B VO_(6;~4Cw^]*JY"ky ;]J.7I<1qG.ӭyFE8#dJyܳg>Ɂa:Ns\4|n^OýIl̜?x`jY# }je1~&Fkd&~.Q(nh!K|6t*- DI}=WBf3{w*-jGG9N:<|p<n[O0$,$cb–N$rT mdZ!C& |鴰`-1 [4ڳ}-[եƃVPbU#$]\ݞ4gUoj󪣮d)Բ}TK0RuxuNvw,PDy6!TKFr%"}˭/7] F' sY iZ^ai=RV^m~ԁEŀfxa}uB9ɽ|Zf(:D*%zX$Isq]sIb2&QG#kܡ.;|[yK{&ӟ&63ZAbB14hTWV>*YziS'Nwir7 v{>0oV跽}2:8wB=dzLUp J cBԲ1vzLrDJU>{ac`ء>&8o֓Ub u`2_L؁!ڧ9c,c'9A,Z$S#5ˬR(B&7yFW\ȨZ*66]'#TJd\(?& L'MYr:;gǦ8wx~49}qTHذhmgkInkNuqWoQͫ5[ۗ^d֮ZzCoX-'D=RA"(gs2JhiebѤ$G7:5C]X_|(fTΒrrL^B I"(2!զR/v}.=UݭNYD&&K ٺhQf뭉)_'cK])!hHBbMJVE!KJfqdC`2R~ n$³uGAP*9WA ~׹ed(sf|u~2f #\;?/{\;- Ã9H7QZS5W#rNﭯ ]s>ET0b_Wdٟ\wZWWa-xxX1YȂwҳ1P}B1 BΝZj|'Hw)3w4 oƬL/E&㐿nZ w3k n63>P )/w˘)`LOt;-#7OZNgӋK%;)rW vwvMް"?ܫR9 w?==^"ukvƷ7ӏ `}p\F󏳳lEjomq|w#BfNQ1_hY~bB4ly`=*/ǣAO^ ^]ؼ%]0LjH4R76>ƺ̈́Wޙ|'>V]:ZƉ-6npx/?߿?}~O?|R/|Sš_`[;ּbkauo=?]]}W?;ֿ ~qn8p(k5F~/5g4F8Wl&Ylh8|=ʋp^䖅x! _W+>16HKG;uJAP2lQV"* #WՖ$!$ὡԹa|[^{C$ێz$,ݷV(Gu<]0]R6d3x%Q R@ u.<ؘ糇S;Ӎ]g`ݢENzx381KwKWP>Fv&t@\d!S%! 4JTrR0iJ)KAʚq(#1W 2 %LţtN)VڷE옂;g;Գ^__^N˺xݽ6d547eB΋{OoCWSP=t6T(6 {G͈okb!GCf3i\tBmI X@M5 YG2i׶pȓ@&>%\BvdF d1wH|}HW|(95Nn'8e>M. 'W` a5bVB3}jU9-$ ٩]aKLP` wa=rפuAaX1y#!VPBM)@Hy%Pm"bdlA {iWDcX ajMP:;PR dYWttuAck=S!))DqDz-6tѲz2}8P,ko{VDB{^ki3b3_l˙ Fp (MȜ U@^h=&W5jJuT5z M$sdq2ibGʚ레 |BկfCm6<A AJC6]H$f)J&^|IxFeo"e%. 
#kW!/*RZYa96RPRSx_2L҉|!FV*1Qy40M")rT@xKڑRQ(Q| !ʤˑ-Jk5ґ7\>V鍰H"Yбlږ2䕫Ӕ}K"Q^yWb,,s,&KS0P>qIcH`#bkB)E" etKihfm{h sut3w;攊yN+'S[Ty+.=޿Zżo L1|Z:^h &DagI{H+~Idv C O1Er9mW=)qxYmivWTU]6(SX8S7+&90R+Nشؐ\ ;^Yr\୵j2m2w&eNP`Ч, hrƺ=~׀H%7iQh 5(¿V8d"!"8H$N-D8x,THm%^'];=f;$46+BD(Pʍ"6bEw9lV EbD h$ D]$ZBA-mxQfFcf7X%Eݷh$sч#jĜYJ!"R6e1's .(&:%V &%Q"l 6`5 $ |*$CF;38{U8Z4is9D8fL=wd9A+SBEIӬTy`_W/FWOq-lݻ4 ad#yOx<}{FnƇԑ~QH=;UY"[&ԟ09r]otӡvvH9zG|Y2Q+-g a_QY]OYWk1KJu1f JMr 1 #4kеoę&\ąL_𲂩Rj6œ7T?QQ\QqڹΆ uMЩ`*6‭iU"bK(F[ :aFhNP&*.Mlf[kit? {Y6->ެWd:{Կ|J}m6Wgued(dLs*B;݃#rғTL2*9O-+FA%"@ F/HY j Ϣ!Q~XBWg{j.D z5kaQ'ubXj\&a&Bkn+nCO ߧuƥ  7+>]5׽WSWӒ!*;-~r(49y$KXU5g _Ԗkt)EbUETY"c^FхC*>&"(QF^3DsED+˙fH=OpbDz0`Q/3!&3"Z%sMq:*yPrH##%}IQY!,O9ƻrp@빽nY"DL$X_QY.eC^"ĐEqo{ VBBuڽ}q/Ze(D`F' X*XΥ{tpA$rWl_~ָXNJ̞|4)[h8U:_:Cl!_/0YFcD2Y+냉5eDDL@0h#2&"D8Ϻu]ԦؚɀqQ>O[nK^%ԮCngvcN~2dHE*L<:Dc)Q"!@%K9 Q ƑFFnJ \? Ry4*҂!J6lf?Ŵxr貒3eL0G+u=3ae81Qlli勩wvU Ԧ@6t6MO=L2e.wI\n9*W8^q]7?S7_H8loĮF|ƛuo"^!RW]A)#:43Q)9gZbD+-# q*,&o#;Q6o*7ƕ8t~ >l,IK֘5m6ǜ(J)[sLk1$hwLPr1/wE*հ+s~-*A+ťR]}7J bX}vv]Bj?h):J.]=ؕٯ^+*(+ӻs֟뻟jl]a$jF~bO4YvCL-r-seη92[fIhH^ :ո\ :Aȥf tAc`yp/GJ.Ǭ{tAܤZ\Ypv H ,|^N -qDSܦt>d&#QHYu2LwZ D佖豉hj4BZ"p _F[/m`E(VZQr""jRB$9IJoP'5A8@Qc|0a+UaaR"o6v -Օ;L&R6'^7>:V2_&JtX;֎LUMr5Nۧϯq\y{8N4H5.` GB>=͜^iDdtFB})nP6&.ՖVNq飶ěU`̙挍3nR ÌB0/uլ!j~iiҍevw`T¤O$q4* %^FĐ2>":sTMd+ƖV2d&(,r!ch@L> 'H c}9Р4 {b4i;1,h8h: DH[ʸKs &9Y4VȔ)-S1))DCmŢjUWNvTcWSCTD;Saȟ% o-]FpeF^U0%BEo9,E}bD+/9F1 :8`vqW|[G8}h&NYD;H+)pcsőTLiad1h8 u !ap) <fViǬh)A@p[/5^#v 5Ύ-3<`QIpɗ 5frmI^"wSײ\.ٔ8bNz"a0P7sc',c 4. @uPh[hDX|uPaЌÙfHFǤ"9 {aQiƝVc  B' }r{BE/=VjsTUw|5wXu Fb.KV)Rq i{7Nj Qf? =TW.dqG#8j  H4k Z,D5󙐚%q Rj,6!6CX`P ~sI`pG=ColȇV΄hq!7s[?e:]?6I>6|"VKb$?lj ) JPU2}Rٞ"П xb\u<؇֊2_eb7gS`)"7tӖRLݍ)po7Mbl(YR-^wXBp 2^U cav:eZ~ d6D^ݎg~;7nQtu[Lk*g[?ஸKǹ(5.~)?4OfF2s7S`8b̽n}7>[g`(T0ܙp#ChJK.]+-CZD@>Ln~8=rf3ڴ9jW%hM֭ZW82y”FKS?b+Yo nn'|,MpO'O\gp}ݛ?K߿D}|7n? 
K$@`y,ӛv_z\aT]Mzr໬+rͺkG͏zRwyvVDo__NR4 $Wdz_ǩҝ5MϬ*T 9TW,D 'dJѧfI^}ǣ">fM,4(gЁS!:Luk, 'Hȅ~ L1"fZU|&Z d'K9)M*ւL,zz<+{ެ~0G{??oS7eyR 8!ҤUxC'Lߌek:ˤi "H U;k5'Xe`ۣfRDFZ{sZ䐄@$iEG-24\zeb"dcB]&r f^YUi]I8U`ٔfn7;lJ}h&{U]?o9pH}O¥i>8{4'$]V+Y*{FknI'Q` 95E+sbq~{p7qv}VD8fwt=߅3バ[Eަ q@h{׵k?ĠTSkF篭٬i8G.ŹuH-\}CݝJ7<{QXSy;:mk[Ss&=-ami^o8[DyhowpҐ'ݻI]^i%1{pYgiE5)' H?{nGF2N -iI~3R#Ika]NJR#ߑ^tYH-g@Rճn^g}e?4qtK[ u[rAΛmgɇt?fYt!;nmvKݧf2GŋZ=gLF }ol,2#>+QǔFh6~,EOn^UR9\OzūˆÔ?yhd7\oTE;L I2Y:~2$'*b{^DzG6ldB IrfeSpG2i_FFVOZī l mζ:{nD5Ɍ(EVQ~6JC^ V&h?JZL@B,' %m>>xќOb>:󦷕C@4orYȃ)Q|2(@ZZҟ$kī7AY.6eV$xBФtCFw%B!bP4y")59ļ:Z9_;btP2 ,yG~OC,Y5vs{qQG<:5{r~h~fDFp6? % E O~I9)i[./uQ.l 7V/m__uɛϛg^kԓf^ٟ޵¬e zKNevp(y*AtNIC5Qp Θ,Qd  >s+Sarۋ݆UnU>ly 3j O}9z|F3tȲ4t^X8RlMP:iÔwjԭtP?W[AB6m[qU獞vA7U#UO#ҽ+,1;զAFtz`zPoq}q(޺qu?b'W_*9ox`o$h*9{T]ZvFmp?XRi2-TǛxg%E?r=\]ox wÉᔷ㤁oeFWwZgd[8<\etCҨ1RBҾ4?sbq LH|eHsGkѝ #O> mkM_z`ROCPk#;<}:m,и 7#Ӣn:u IӑB?&: ð G fRDFZisH ڮ֬C$iEG-2 )MRBGs>ЄND&8@Ao[X7S[V]aiݷ*}ĺo++usK}>}qE+-χwws~;:Hfh%+O"rj.tG6k[߮DaaҲPJ<+QKmnwp-3z兙8:U'uvYĴՏu-{#@Fʎ0b2}?tײӺ;^drpAJ + EQA0%@hLS#=b(OD" @aIsETcɑQG*fЭ}2 [yuwvVPLcTş0La:y]ϭxCG$}Wq}+GUZ~]f8G-ۗP|KVv@8F;e t$geH6[݇IU{j.+&=Ie#l?EfrAes(22tWCG6Elj&'^F-x66Vg[j^0gO"hl&7gnLk*og[9\_YJrI(Rm8YrTo-^dosʓI%@DLp&K]x'$ɀ읊N6[Hxi1g!%GEK0A)-fIb (8ƺL=>B>iꐹ,8FI h.f岔GSeQ#-?IX ֈB`|eXB7!Ebzm<i3Hh_KJ6BŠp٩@zM 6a7꓎)#NƂ9Y'AQy'- j@z(+In Azc*?ȀNkbin<؄ygpzq #1G,x|:Č+-bp\ х=ӚFTd`yN 9༆@@@@(dH2ɋ@%ӯdJt0F/s"H4zB$}d(촤%-D" &ࡱW8{;RuQႷG N_R’ABU݃Mh/eP9?!솶tg7udzoytvGgg럦A9c/ȦW?{7;~m Ȭcyc̪a~w/%anns9V\WKlkpjV:@A*+o(\~KLQU\Jl֋V)e7Jh:KA'W7ތJa.SZhBNj5wJD5[FPAKX\DS#ɿB7R?_!xbF?%)J!I"EE#ștuw=^10!qQ3ٗX|tٚJFR!9!e[ޘ8t5ǡA; *" _Y/P!ut^h-XZ2e\*BmB!dY UTAChKҤWvpA̷o>Jk+>db`t:B<_7Ǿv—$@k %!ij3N mb:2}f6mk|Js> ~YSPK=!G ftM/yU0g/ywovDOe2[l p`YLX-Q0Q>86YiVZ#f=.5݁n֞g:EA - Ҥ]״&ʊڨM_FϤ͈w2ɤZ=a+;ͪJ3ºфvj_j_ܽ8ZRr&s9x'=)tnܸ}Y>I,Jt:d7wWz\i4;!h=VfAQ=3 [ʵcWx=?Z_rӨJ8fY:" g!u)sry*SQ[S!0< KcsK춍V]c97ȴs>j36zr^fl uüPpIro>Yī_;IW?~pslnJuOYɀ"61CʚL,Dk1 Yfv 2-C|LMM4$#\2=f]Αhl]Bg=4c\ n ;ڦamZjM1XŁWRڈ 8F@B@a&]D1*H.8zt|GR-46Ocd0k#*em8 
$RH!Z@l`7ZJk)mvXSj\fQkmpur1X(m+(XI[g eb>/*s*YZ+=K䳩 m7V9-"qp>c!z*Yo++GөPz4FWT/PUGJ}KG6]u6Smmb (=shkEVX!Һ^+ey}LP2ƃJ*f,X& DQfWL(,IbL EcRN(3ddɱ gp{,/d Li:Mjk#uP-0TzԷAV:Ea4 c"![*:k87DPKU(^_.({.=E{}kOwmxĺQ}oIx9f 0;Xˬփ HǤ#Ojq*ٴc/%6! |t){X@UJ)obsGY$'B9"3H-!5KHD tkU -<&{AP‡1oE #OTBBcVuW~쉋8\:pQ0#Ys-QND.pԹ>Œ$Nk? 1 om9v=N}ԕR Ϗ >vr\?,Yp-, θd솝]-%xzwg13jrojrt/x,YDn\5݃hASO_|*XQs~u~ SwgX8w_3*~76y_͉޾=]ݜ^$gqhḾsOmߒ,=OT9`>v4y3~xEk}\ߎ.'*gQ@=0WK{݋J3Ej_oGuw=-Iqإ#Z>ժaa ""ĄG U,߻.aUw4UGNrը檙,gX82" WNq3޹{Mҙ|RM8˺N\:_>|zޗ?|~z>}{:a,>V c7<wn>aM MM6Z69ɂo2nr+]B>akekM@R~z~;%>n,\;AjU-JWaI?!̯ƾ3"TUY%b;E`չ&2BIٯC 2W,T}m}SUy#mfJP>̣SJ퐎WFyLqD9sY& q#K@mcN3&fTƘp8ͪ>.Bt77mbL8&t5:FSS#wo\2BQdrFAUⱽB'6Y3MMl6t7wevK󭊦;Iǡ${%mSٽ|zxuݯ.DY$(%6W]&=ǐŷ 64ߔ7{!%iSdLeڳX/2lBXsDQ ) P*fY2'A̅Cmb:2}fabcl)+k_5C}kdܟ Lx-Ej)C(}r*UqPM`Q0F;fC,~鎁(Ikl)x9sK4IcNevYB%M&yux,t>4^Rޤ1woWN$RAP w* TNFz\a P5&cji n~6ZCwv^pI2r@*eCb(7+ȕVLU%?yh'3ۇgxwBB <J #DBu9Bxc~1$G!YLjpϳ h@Ч#6cLR:9I(<R7z"C:&lrUU`3pGٚN+8"e޼nj.bmĜjpl=WOy3=WЎ_;ތ ٣uAl@%JFyNWZI#kQL?&\RKDT"drrc"AMtU"CRG9dk7X).Td9e2 +vΐ5d,%}, /6LfѰJ8UE. D ғ7ޞбޗ'pp -hlU> t6e Ps2f[Ej5GTVtIZ5Mh=4݃ up%UgzI_!b<"a^cXxfF)JCRݣ;xX$[JT3:X/ȆHB@&LJFm6"I: G* la-!EEf)9̉ީM2FΞ=R[g{mYb1bz]J:~ƶ"^s >uuCtf>zIWcv4}NH#lhۖwO1PNltϷ$3_̈xqׁ =*+\>rՒ7O $܏yMcR|7GiZnm~$2mduۼͼ}77jZ#!EB oѩ) X#H?nOH'g2i. (òFДQ&2 {xP)v TU#e/\F yH\E̢ә!U.9YkäelW#g̕^>%<@_5orJ{[YKX)PV4(gKdPx<m'EKA+ⶲV1ye,mt710ΑE#01+g*qS˾&ϖ1RߍNIƤ`@搵&(Lei="iNH70 ڭ\𬗱 {{6݀Vq7;qfx:O)9*|Q5IMVA8aMf۠'(Fa^aT'hRd|107  j:+KGIӤBrB؟r4t@CR <?%O|ڔkgɄId^!>5!! :ܧ6拳كwz"E}Vbu8܎m~! ? 
ӽi>(8jFE!>֊ٓ-.`RL˗87nzĔ˔nS^Eb>`66l_}?;ׁIo]@/g4 5㶻[o>ۦ kơoPŨn+U`tY/7ܜpY,T:dDI ) ^JG)!7e䱀ߜw/o˔pXܣEnlhш3)* 7P/3Ҕxd^A/[N+K #˺:?`BB5_D7w픭tQߴֆðeqENAEz-wJZ t+txˀmBSsK+:=0M'P~}wT?|8r?n&/ ۼg.E9ewUt"hLOx3M^8w{F]NͼS*mօG_E_[yxq}㹯;P5#J w OfM 603rHay'Aړ'!AB2Ij6ϑ/1!fqv{LN7p^7=^6Kߣ5䱫2 1E]tY %iujT{`Zt>^b,_+UDhfQ!4Yh@8)% B'>%M^kmIg9=zK8Nm_sRu/ o!ZG} .pi6AU%˄?JB8]_/q<Ѹ~%E?gÿEncs]~n6<:Sn*Ͼ` | Tg+Zb16:?VC#A*e#@* 5''jʕAq5RrbS0=>pFFZۅUGY=ډ]h4yck}֑`HK9ieo¬@n,Ӷ RFs& Nll}YE^d+q%J k BG+c9O)3z)DV)"K%Ϭ=V=N\X.ܦ Qr~uzc2N?Tdm«u"Rp#8!Jcg:p30R2zUuH[Ex{Y<_r޵2Ӹyb7moɾzC" vEr}+" ƜzBZоQi)CO;C̽RcUk.g}Q/2U7!7k BA|WA_u-U'㾃Ww== ~L}2S-mc-X:Ի$uYQ;yd?ҍ{mBji ༱ƅ76={5OA>A$eo@OZ/SYf%,*K,b&o|J#,aܴB滊E}%Gϳ?&[QXŕ4*`$&E γX "`߅*q itk ?jZ͝ L/r&`QjQЀc4 wIzLڠjƽG%nHL7zO5YnIkrAZea ;ːq"\}&-SA t>F[WMN;@ܸ_GE/2sd [IDC6cf"FbMi1BJH2U_f`tsI(姄^Dɍ!ZuP)llV#gEI(k&~R/7/udUUOzulr֯جJhm/L<9kӤp9κLm钵;^+תjnJ8fY:" g!u)> d6a=U ٧Q[S!{sK=2) dso0i\Ϥjkj֌QQta5x.ԕuu Jnb\svz7şhhtu4ξs͍L >e%BX)k2" Kk=h (4HbT I0IJ@66QCАpd2uYCp:GS95v5Tv5x6v`q R*(jNEȂ+ԌqȬ)i2|` q$b 隄,jFMt}I,Ubj4v`9ak/Ox*ֽ({KGcAk=uI[|4Ne1ӻq})]^3{mR,@sB+F:g|ΑiKr"1Xο R]AzN [d!BIb\sJJҎ<%R/l8uXqg+Ã?>QZɚ+akŒ$M2p?AE7⃙1/O].++ceҵ|:&|x~y7Ď_ӟןӧ'.?][o#r+Ƽ$׼o C{p$'Oy Y?bbYeJja_XB>恌au9# *p=+&54=LuZcU*ܳTbCF6 _clwG*EPBd~!39hd1d9B wN0ǼU]uxIikSI͢< \PtN1)R5dS`gﻒ*QmM*rNǫ-N7^\t\ë˳\ܲ $ ,J})!EZ],Rj݇\ÐK!^vW$.كK"-UR3N]R7X,cbઈ+.&H Oh)Rp.(W$bઈ RHuኤtWKg/ .\Up)pU5pU\J`?2n_'0=/\R\ W֞|u c{:sNV*ku1pU5RHt$%\ |> 4nsodYM__~o<-W/WKqO?P*hda9&1ƈ 9fcGfT01g Y҃Y-@~[iQ&LJV0n>կEz?|?xzͷ kIsZ}JCIX{;L ÑPnB| 3F<Q{c++;ֲL'R.\۔0. 
J[\9<E+Gm</ >JLcq.S@E"YkK%`00E1BA› ]OWi&to/8~͡Z9؍^ox-'*5GY~{y+~\d m- KdMbp9}.iH%K}HKupE)ɃJ5@MPĀ)4B`D]UzNYWu_nqq휅쎩gVx5Fxrh1Ipͽ/Dhe֏v#Ab.o3E nnOOs41b̎:nȆF{?σwO>Oc楑n|wg?_>k~/s6ѧO*ypsdo媷.X,χ[ a4y9.9fN}g3wzno~&0y߼ʹ}677jR,B L6h*AH#8iRIwut!'vI)ض0(h9M%:引{ r.RUtpȭVyx$MNgT]YkKeWg>.9^izV,bSmNISϛij ԗ/HLWWK9wX5|ل^1 b"uAE1KwcVNbDHDNQL@>N,TVIJ@LlaD4vHF;#=pYq[Σ{uYƔeGYJ „W;;dY)sETQ R9JaE 0lm̩ C%8h}KA)꽗fYV!q^lu& 3$uIN޻6佣'\si5 u ULļ2 ٶVT^b!!š}&C>+*h>U':&rrsgȂed6E9 =dR(ZYcұqWtYXz ZT^og;P0o;~V m!bCqm6yk|4d$[9"pb0>D+tHK]V {к+x^Zyd:Fn<ː -P3I$Щ >PyW8{ܮ=uW֠O[硂'N[n!n3=qҐ=y~ϔm8LkǎpoFti/Ҵ̝QDh$ɡQ7 6(h''$$m>;lkΠV2/XqڮrVcsۥD3G`X[LBQN*ًK{i6-!<Cǫ/it@yBmOMnq> O>6+rm_+t158K& =of-ARKfjJ{Ju&J%* $cSH+d+)+ BX04\ K?U*sRljL遃n\pwGR}ܱcNdv9偸`60Ƃ9 75K:KGAU7P$S*| ~@P LbrOnO:AZϹX/G#N1>\ӟֵht3T#B ѤFg&4AD9$X{׽2#NϡkWOb[cCO}b숛gpWjR~K{q,N% Z!em3a tiA$oez{YҦ3VLo.̗p6 |qpMz9<,~z)u7~f zz d#"r$#"UB+/HK7+H+Ȫg7׊,OxXݣMnljш[7KTaξ`E>oCxҗ^n)Uwk4W^1ϧ+ů+ʼnbk>;։v-4L֫)tP2i p^!+YWtz6)Y׈庄r+[=YcuKLmiy;:9L83G;m]̑tǃ3[o3g,Ҭͼ&ʆW=.-&1f,Q)2*q3k.UZ2Eg_ٝ=<޿1~#D?x%L&x}Ѿ!noպXOz݇ڟ)^;x(k_m>*ާ@&˴m y)|9~KNaz;~Uuc)Hᒎ(Q *dbժdZ/xyr^y(}'JgWӆ`oXn{]/X2[kmߜɿ3+%̸I͟D2h0.FJYn0s})]ܚ/aڗ0Yt+p8ƶgZaZImTuqHo_Ok2ۏ%̖~)a.F\f>'MZwD=w+4!,ArKIl鐍)3#-Mi!eb)wݳ, qїsMJ]L*[I+^~E1A 1$+uPaޠ8U4݌۾TdBӸSW¯K=چ!N Zm,yb;+ڙа|io8K)SwF>*Jd養* BjNC,6a=.ǪȨɈ9f'#`黹%F f6g&ZzL U`aq(X{,yY7:ݗ`#67<1scV2$& ,)*(ڃv J&Zfu[nC< & ()4¡1LEckӯ%nw#\r.Vǡm*Qg[xA *@I#ymީ(r2`*iƉƸĬ`aV@21CR5HCXE-Qq$ MطW~ʺW oɾs.Uȴk=x:rV+SǔMϲ'OدS})f {mgmnLKZ\W^6bj/>W?%)!)?!/CFqdsٍFh8j ƠH4k V*D5 #HRq Zj,6!6ź ,F(hP9SS{PR?Xra+?w5qbpV^ھtI>5|#rYKY1HXqS '3~/n\Fo Sx?O06kBUR1 >٧DE3Lc,>K3u—Sw9|z$F Itzz2L?)#c sOZZ`Q󦦙T\^q"2&TA9L~̮ͩj͓>7/ՃM`8}z ̅v-N9:o.=3Fn;t1R Gb|H!'4M,}t LXL;y6B`2f5\69JQG\4ꢹ D )CHwȯ% >Y)t.PHx3ju*"<#,RFXp4&{u=٫oRH\=*grY׾jiX_I-Xfz_{/ +M'^co,{L9fTן<۸sԪp}2S;wߓubmTWc#)kt4RyQ^rltǝE{nm]kTF t<4rBAVxC*GeX\tQH ؇B8f0oJM-a)z<)c"5nΚv}k\w~ۛcr}뷮* nNl^[}=Cp>7T;y]]J@ǜR,K+v;>W r9KlԛịUh4)@REm6*GT KIwTסZjg=>wø( qf`UCW Z! 
J%qŕDC(K+u0 JВW J*qTM@@k`ø"(z.?)9S8FZ~r%\ЈkCX3((dQe?~ oNҝ3qA@Uc҃{JCIPQeM&ZKkQNj ~~yv'-8Սh @i/tX/@X?{/!c|RhQD9vw[aRd0Nb'd"IeC}x"x7ʎ1\p%uxp~ej8{U5j=~eKgיڵvfwIŮE#z]4[?,%xY;m﷮@ %ljғITd)ݼۚ:/;l#ۤDejl{JGZ!J7:F61=cVmi5 o#W" "u~Y6Hԑufg|t*bD[L!#N_  q"aǏ߅޼`/CH&2g$Kv@F4, \ňNrFtRƈ~F EsQH.razk8S?`Z wV1(⿴W-P B'K@rF5lM1fVʏS\s,{8x댹t|˜ukZovS*];G_FSJ6ZcN&zG1 Lkl6\7`R041oLo[mK6 0SEA7leS^2y!v{0o:ѩ8&:y[z‹zszc gaS3~0eܨHM4ߺih87ՕQ)M7ALR1 {/FÃɶ}cLtQk(X) w91ׁMxA :~ҜX'qԓ8!r'zRan'-0D.=){:]Գp85rW{O1u1:YzmC?bRIy@K``BNRR5YʉNt~kF2[Un';ŔyLl:|w>LFLk])Q LYZ}*Yd#y >z=T1Fub1W Tgil,ĸY:y,U^͹*3*F+oGT,X5, vEPR:{8 ^sq?VSLY\\*W\ZG\JK\[+܈MSU¤(*d?˫L£t@Gfk `!\yD~ώ-:6>ǁP 1sIF{E"hzĤ0(S3 MX߁K#υy/@'_%cvZI\ň<s/k e K34xknAK\ѩKuu^u)րŁL!s9!3.zry2a-q2EA&#,.3ۢGbI,}4ggI2[ٍK%?Ix6_# z!JO y/,%ɩ]IT_I}ciA[v~QzQ {[EhOiܦ)tR^gz\?Te 4||%T8 l-/"/))J`*rCK,s$#uMemJݹWSr -pX Z,&1bm^ dR\m0hrпΜÙ c6y9cY7Nc8A7$UǶU@jK[mnBÄ[Yn[K<]5]9 )ORnHy|[h4eg[`.||6%`tc fTxcc{ Dv2zuЬ3X6LU w R ɰN[%Z&qΊΠ_ySϠ$5(Hc|[2h;,5ԫ*Mm :FT07CcdƳN1Ɓr`ZԠ9q87rl[$e 鬲HX"&Rb q{g>QzTH=^``͇eo 0eoQQ405*citخ*m׬63Bx}3)t5e[ &!2KAn%p:[fuo%( jn=[fRN_fG1VqCk߿ :XmB-Pb7i&h•1T)b+t@Hm < K#~EF5{ S9"_N]a꺭KYuÆMϾdR޿0֎ɅvDJ]rm>t>w_: _s:_Onaߖ, ȖԔX.Ko6Z{ki,{r%uT*TZ`n]&O)2\a&r#XYΤgQvtŚkwW7خn"of2/憁pxi1z,(^-=^J]'s r6ȹQjEآ#aREbX Fm> _[bVX>X6w#%QJͯ5([!rFAn `hn)1HW76vMeyس(sd)\c$1XD". 05ѸDLw̅E7ta`? 
͛&/K6 z,c^![_L&7)bCշaZrx^e^'ICJ<$Y Dq)uRɽj+i ZQmYbCDNcF1ZlH4w ktۨQ1,x#YrY0.E;Xj␖!-) r]XdY PI]&B*%Sn3A_NP"c K/v)S dCXK6o_4 x$ )[X1cwqI4eGDW bAIҢn2=>HR Ev3"^ȌKhlTjKG9><:}) -;M,RbdET ^ \6aqGYT XU;DZRǫbDz]͈nռaTeϞYd Gmd[Iu%mre"6x =^ߕ9$T^؂h1̨0fY6Y4PN "xKȝ"q\bO*Ք.9rJ`)it!3qV#r,L3c,4=>{Őy󋺱 \pr,՗Xr{&~:9-Ζ!ԑT!\\ XP2& H,nM8]!+Ml϶'/ <7%1Jul+qV#v<k嶠vgXԶQgKf#Ԡ}P!<&Euj'Lc< RΆ:ÂƆJ`!$2d(R.&QA`,;dC/[~_q[3x,"#2gE7E d_`cCN)evUI(p0TdPc}cBc%sPL19 C ؓV`u :FY񫎌YmZLKc\=.t4IQdV:e޳bIH]?lq9pq[3x$*-N{0>(:)hΑ+Wwb|øI꨸>z{CێpșWCѫ?6ާFA Mv5 5:lL5 /F_V'~|#=9vdgX #Z'˂T*ɬ}l) H !#ft(MPEvkL4:@$Y:o tt>A Qk$Im&'e )kQ) IPLlhT[!CU(VCOvtzr c]kM)8_R+Wh5;:'u.FHlld z,!QoDǗkKMMRT!\Ko!"MJf+V{oC޳XKJfT8B [Cz̭},<[QAX#4ҋzd|M(sf ~B\SiI%9_V\& ^RW:_]9.#eCPo0>cu]=wwev]{*l;~r&׸}SBI!PgbNa=Yն4q =00]7 ! $|47Ea UђS}Vs*v!3Y far@kjѬ3@t2M"gO@N] J]=ਜ਼<u:ĐcuX fܸ-yvn3|O6DWA%6 V5>8l"D܏x}-+9(AyD6e!JLĪsNF)Eש%11휔I@ OBYt$ Klc"g(m&]JKb3yq?qK . ?8W4(ZɒnaY99hCHKk>&Ш!u@ɛF9"61DՈ_I")S(y)Jd^D$ 죤,#y|2KȘdjT+͌((B/ Mb0Qz|-SBw&jJt>w i7c̳Ip1%v?K φX@ey64ͮI*4G5NnLԞx6aQ4_/x `7`@bP C 0.DU*Wʚ nŞ(҆yP)ϡvy elmVF8fW4=obWB.m|>P֜J7][,Krnp+߳"q\>m|gw2ͫ֠]kw.ׇZ|q{wίy_ 6iIK+Us[=q{^Y=R>侂FR[o7GNwt]jM !؋*V6ƚ܀xVR?I7[礻-rv8隢Hm$E%Kv4u٣NG6#TRnp| .J1KL,#oJc>I8 JP6t+qֈ\69菬4:vr8]Hg6/IS~ij4,7 $:ܜNW`LJZ2i(7@Wa'*]afofG](<샏.&.l~/F$q/8?}Ӥyyz|,:gh%3BdA䉉mY; gef{'poSG:#HAvNoI;9@9mvR]0yņ)]=~ӞL=S{cEͮ&>w/F|_s?~7,]L ހ}j!n8<E=~\sM^=2^=O 4cʹW >(?؜Ixl*&B` Qv'7lyR8V[䓪|.} UDQTdǹn*Q) [MZUR$QW٢I>dbeRL@x8#CAٍdnrgvgمNOe*{+sb!P&R9FoXs!'U+=aԑ\-`&t~KARKLV 3-)yRh~4&OwC͡0D@x㹭.ST)= G"RN9[B(<P0KȽ>vژF tń]ѫE*7` aR<[Doc[dc#l ަX3=lkŠ~b3,7iyWT0@6{"K:t l̦XWj<7k)%&PF,.?E{P}]>P_׳zu=}]ύ?9Ժ#?]xAoO+Vu_:##~ija>& >35LLIK5>&_u$u_Z/5JpԵ0zXtALdp7 ]&@0D*Hiư,[Tt1S`oeTި޵#b>⭁yl02=XdĖ=x$lSq,)R_U$}6 cqflyMu1uS}k0et|9} n8_6CY{i~="蛯+PzIy#%@LvtH.'!L!dDBTrm i zR҆e$FO$ƚP$/ecՑ$['j%f ^ʳ=5ʪ[㷟Y ;+ܦUzXmW V+Y[-w\qO6~> J%Π8w[Uʰ}sz|^4/hbSexCFgAIhozi5řHExijTYO0{/H653 Jåe(!)^F2Ed"@֛A{Ң𩱙L*J';X ^zD壈Lfޕ ao{~T =j)K@z@t:Ji L-2ze)Vs6B;`NAzRx\ه[*Lɲ|g xvvl"h{FlE0b4*BŲ [XFhexA@ ںdVp*dW/HlIL8/qy(lu<]cvj v݊6ƠA0@TLl`"t贗Lc RΖ 
_M[x#-dfjXXI$Hd82Fdب&[{f%jYPx."ƈDqm!V(,Lq\)̦`Z V 2 KX(mP [BXl-CN2GU*`,Yf.SbKZA$![nYg3"~>%qq͕:bha׭xl2IV:} 7!$RV׷E\. 6[C%Ἷ' 6 7+:ȍo ~q}, rsTJᐡ&yC{H*V׿[53pJĐ;̿5^u_×l8ˎ!]w-uyӑebkw|[@C)Co |:qb8 v03Oѭw}Y*w-5+M6]qQm=yu>O!ݵA7'&'˞dr賎2JhieL[SѤ_\PEۜ3$p^9H^BIRdE MV5(ឪ:1PznukHP,{ᝯi;x)[!$)1Qq?afEBS%)zcLޤdt.&[ͼ@,` VQs 2 N(#"(5RH+G5,RreL{|1W= %?މT4Lf^E9-QODNsVE&Hre~d1}2L/z3=|}?*/_ggvȘzgc %BN\+ӻ89ݼ^1?[뭠J e&0Rf\⯛ݬ\/n ?i|"Y_/b|w,|xfx5G~8:Uޱ|'.f^ӯpvzoYg4<??՟~R_w? \QW`ՋE/G_kGyn9o\r+D}T(i/_~?S,?zgul*p=Gy_XlVoh4=J.zX*/T<C<:`ylcy\߻|/а!DYY#oQYa_# ZmIumi0}WӍh'-x!,PtLrh1)ll.$iؙ塄6[ /}yxmc\΁礼*|4W !r*4.!v[!uƜ\AMEC @-յ)Ր%,9zQlkkQ)dt(sRB7e$^:Iib&:5I$1FIf 2ʢ(-a uf9YJDӲg3Oz H[0jtܰߕ|wyjt5oLvE6Vz{8WbG ݳU]>.0#9m@3R^wvQuOI }Q"oJdFHXb+9v(*QMB2K& /h@ڢB^HIHYcBq( mbLR"I3+ _Ɣ8)cjځwS^W/XS7`~/J,h&8PW>nץIrҀ4;K {:@wu MjZE-%&w!Z ,hUz-23 s!LvJSY3a Plg=^Y' Z6OrTMY||޾vr\+"}]Jw]{X Jx):u?ں^=Ox6.plrˎ[.mƛ;gآ{-ׯZruOׇuݍy5U9|Cwlwh=ۥ鶑r˻]8X=U?桬諤-6%zmvqݤ؝i _V<:YgsrW5:N;.hG##nHd05H}$E%glhs0!z1LFSRƴM%[ YZ*z$1KX2OJIx!`luiJ8[Dp g+x3x=YVX*p쓦—k-h]OSYh}3Whcv(2w7xR'˙xWׁJC! ^$R'Oxxu@$O5"y\AhIsdq53*h( _y- Ǥ% (!G"jM!fLYF&NQy!e%flQu~lndyz[ Qœ3h! rR1gP#ZlL-v5 ){mmo7)Ep$Lg:sdIZF>7&8m܅PW.`R^yWb,XL.b0 .SgՐ00uL l,L>t$@e+롌&ŻU(; ,^A׾s-3 ,a[ ǁtt泮yVJiqMq vRpң6ޑ {]j\]L CAl,B.Ni`!#tɇp vTپx2)yё|@1_bohx:MTx+DNz%*C%ﰒf؛Zg˞$[d|r"[#쨳`a*p{z}xQ/ :ƌGcIwoRꛝYq4Xr6&aɗBɉF{-c4M  2[d} #}gba -R<jm&Mga14gKcxsroTtDLB?F,ТzQ-tH:be[Larr8c:)SQ bJBH69r  YZvu-ϗ 00DüIY}.;)Ɇu 84 }l A6 `D QAVUCirҪD]C3qi%#s:aCN?Mԧ튒sw{[lΚZb5Z1%MFy'<}Ⱥށ(!fQ`DA][xV25!J֖y!4c% ! 
+d[F{jEzHAD64Ac:s|eq@zRbyqô> E'h1cVZL0Wևbn?ZJ6 $Rڲf_jU獾?ga<iFkGYc4&k`EkRks"Y`?r񕒇D1R204zβIWl=+6eI#wSd5bLԖY4=?bEEg&m'Kx&, Ֆ9]|57H!ClKi_xqwN8@d2sҚl ރwfdΜiK3!H-'t ^YNƦ66+y*P|gG=E4s<#Ęweړd'FN\& 6wZ+q3#}ת?vO04uFp&lUhynO﬎R]`Ec)SOtRP̘UO}<)n fHRY B3] r_(Ch@рdqFAS{yNqНzL}Yלfϔpڒz㇜W\/L`Z78/ ?>7ve5D?azߦIn}O?O`&泯'i֯zwtcQ`}6Smg Ŧ/jCM46Ki R8+R\QTB eH))zSnst8QKi ?W:egt]`@ŶCZτ4vnåW?G_fvc M7't*u`1`Kr=dG}I +y-D# 9&1s^`B9_Yn|ۄcqE=]0։$VK ¡(Q *dJre/w-!.辎۷ x8{6MˊM.V77P-f go ey;$wΓy;OrIduluΓy;OrI<ɝ'$wΓ`\Jj,rRjjڃJ] 0Šlgzbf:KU GP9N_Fep<"_]6],5یVЭbjdVv#{IczVq%e J&"IIQj :+gQDL SU*8yE"m6S#dF#&ytc dmjlAMZZ^@Z} | }; $9rSIX q;a;ǐqᢵ"k,OJxPIYŌ7dt"j]wB="%Iʜ ~ R* rV99jjl$?w#RZZiջW)ތzWQʪq,x@=8?>\Zn'w6&7M"J@K+MҧedI $u: #HBuQ= 7K01J r5$*@vԢ{AcPĸ8i° |0nq<,T!gk5H+IQGGTY|5?Ҵ K^[ .xЊsdEr&,XN 8XT@E'Ä>pO$J;V;$WMv)_`Jm&G{a~5 +ý AZɚ+^9IS34I|<-0/H5e~=eԒ=42$;dK.P8b3.8`ǦkxW )fvo =geFz}c9NL׍ɵF&Zp'|'L=SwxW *,̾6D_N.o^j$g/;VhýsdKZ+NJtJc3Z͋~muMoo. 3?̃tn"bϗ6 ޻?\Lo#)ʑ@-l0b0jd$ 筎,XVj>1gɺӰrT֏:UzV͔V9s)+bbrL+Gt+ޘjL;{qBw??Gчoi c2-Mo#@?{C统ڰZCx󡥅m5z6یk>q֠ {?_~~;L?DZD/^ZݟtjU/aWlizg]T.UkYjU% 0 /cٍ~F&.c+m %{lVBYPА% JԫkZ^=?/ `s U,kAo䂒3FTy J1ꔑnc'U'u{\6;{̴U^tKP\0=O vWwFЮ.6JOr&iB8omT&hP`&]&,; @8)WZe \ AH:VIȃZ#cdpϳ h@ЧK$NNn`C,Hٌ*om]GUnWdɨǛs@Og lr+űCGτ,hnw>h^`N*@dt# xhXAGu1H?$O A"dIrrc";1BJQ"CRG9}E,\d:Jq!Q:eBQ.PT\YcY`00k;FfHtW }V>oę`1ɋ; <_rx&|3Agp;N̗Qd7&XA:ՄF(\WnO‘a[N)a Ӥjdg*!Rو$ 0:rS3Y:%.Z7zMۂ5?omn mX=NSWYѴ[g?m4,l=*y4@7@ (7bg'\4Li/t4}3ֻ's_Ky>\O?$`y`3/ Eʑ%Ee1}ز-ٺPV@pb[IǯdսxAd:2(ߖ|]WQu82~9JoԬҩl0Fc&aS5`$c2NOS-)mb1E"sòFe# ZFN9oe C;0v TD ܖUx@"!̃ vJZډfka݂O<ݙ鋵XB[ӅӾ-Zc~M]{U^C .bzLdu=,x2AL45Bi@D4!@`IZ" g[} -*ksN#8|DX0& hY! 
آHni@ Ru Dz,G8)YԜYJ „W֝Puy|2>b+%(Hn*|"TΨ$ r "[8s*ȐlyxlO%H,E"V *'^lu& 34$uIN޻6佣'\?:8}ULļ2&:J%00 k&BY!7>8U!k<خ18C֥:DY"Ԝn ;d2(hrZdcӴ +3;eںˌ9ԇ@WhX.9,ipN~%&$zVr}k՟y;UPSb)Dx4>z ^@fIg_be|tTE 脔?]<41t۴}D"'>kʎ^vW|/;fnkG/1$15(Yyev<~[ݼFW$eMjtVoBD4M`M%^ף&yLmЕܻڱ6/M)^$\w9E\PJrZ8f;$g0SA5H )K1L$ <J&E#g*jpv Nd:}>)I_]p_ްH1hӠ=\Gܞ@}]B0^^|gxVA BX|3m4Zhir5$EZr3I0'3 xȐxZHh<ٿL*TdA(!F=$uRGD0)3.TCc(@ M F)Nڥ21T g @ܞt)x[qz͹x9AvD% yї-3Ҕ_?+OBPyd6ǔu)i,y l:Sxkg+֘GIPbŠ ,G 61ebZ{[N,XfB9aIzd:4ͪ`y!->NliRCjatf'0Y&.`~zN]1&G חe˭<|~ϹՉ ݐkQklɜ9$#.fRZ"'u ^Y _8kٺ}{뚕{(ŀ$O Hdω@V ;Aβ^DɰEr1A[.Q=U!2cjfHm F~u덍 N%LvvSͻo&&\<}Iuua빪ç9J@=y3\8s8W#jA&^YPN'\M,h@2lQД8^%'7ySI|{BgJ{Y4y!'|!GV">TCM&Nλ u%=2$~9̢1[N$g-uWOo{νU`}6tgwjM-:k@.Si/#4~PyF֝yVוgmyF(5g}ͳwX0gvu \BW-@骠TwHW .Bgu"J^]`zztE|;~l3?0vGP~Zxs~Rh1 OWN=^G/[+K(~+߱Q.eA) =?\> &mh?XrQCON 7GXZrӎXjxV-@cdfUU GPO+GiG2h0.FJYrJ:8)R|HZ׳N=[gȻ^V9U7Cn 2!&ÉqGa;tţ!{4ޣy3ܱGFs;?]ν\}ОzFWt-=Jtuhs̲g_ng誠]+_R*(AtJY$ؒo+t/~Xs?Zw6;|, A'VLt? \Ÿj=ZS '[i?:CWUAkmPc=]CɭU r]EWk}0P:fzzt$8:DWJ*ƻCWnwVUAi{uJK+:d3tUΨִ Jzzte@nT:3tU ]ڴ Jۇ#]Yv)N쎺*p ]WWwVGrs%g *pugbWm𑑫"οou?kϼa/' nP`z sCtE9UlW誠5tUPM]t:t^"5_nT$?aI?\?Yqv-oOs&eMjtV%[ M4A6 ]>x>1εQ>qh 0,.Z JrYS3_1B`n)3~EYɳR$gyH/k<.?lh .37(B] hHc46PzA;Ρmq(Rߡ $'ɃLɃ" U⨳Y2b!1SZk| ZeUIɧD`K p0H\dL2#NmЕh58;:RvҰ[?]O'hhpӡ;Ifjp^fzZVr̅“rn2޲wzqqNkp0}ﳸ\5?7ܔYɐDNBXd,̚)dȝk1*DfTeliejyL HQhSj5 hCɷce.9Vfjp^f0ᚋ֮6:emSMڽ}n;0(RA{WƑ _vH}T_!z7 }0(.IVI$ %ad06T,iB8̊&7VaVYŁ,dB$jC&!ZQp$EOoX8х]w J *{6zs"+90(GH]u\`r:)~9+sڐle Y+ey}LP2ƃJ*f,X& DQvL(,IRL EcRN(3ddɱiEk\ $'OV^~hk)xżʃc~A^͂W. &YK+Mҧe^9MR ,Q9$t^ U\MJ8-:O@\eSF0FXa [usCTE ݥEcPĸuzuEX: <,T!gk5$9Ǥ#Ojq*t,?Ǝ۟jkGѥu#xRqNhH猏9"mL KP:AjWRakU  B= D(akX4"\CڑD|?H(|KI.:ʉ '+5?4;jF%k8-ʉN{*8F$> *0r3:G.O꣮LjCXGdw˘,rUБ1PҙK&NF̔,\Odap}28[qV^f<7e"n\O5-M״ 6 .tO]>a?֎xZ4D+fZzu:^$g/ՆN^=Y<{E{y [BYwGw㙚ѽ>ys*8K b.0g糵n+Eju~XI1f7 ;]l|VgĄZG U,;pp=_tixWuY5Srr&es)7V>2ՄW)?˙O+,I5&gbtN_~ݛM7/߽︰^}CΟiKQM$hh8#sq[֭~[K Z͂s_#o˥n /Z|7$`yhH.>IVqBd_|CI/6ST6U%U"*xt.omcln-T_Dx "CVzڢ# }C`_堥Fbhyj̬'sW1:y1ikSDMfQ^ zC 1+eIVQLvV;^%5xPMgO=C;G{*o%s B.' 
`]= n_ b4$W+筭Z_ lqmr>o[Kꐼ# 8'%H?/tH.V ZBױUx5!MwoXZW`PM>}$;(hLHmEaY nBV#`Nòay=,0qq^4H  %-LB@ܦldHf#$CH/e%_isDl;nvn~ΒȭM.{ }2{;9!2_m4li9.HYOH#+9ۖ'o*q1PPno3{i!;&n(.["9}#O;w$,O]g)U[nmLzDުm^j_=>Mዐ" WTTx Fq`gHGn38tźHܰ,Qds24e) $sT *Ǭ]+nKܖU<$"Qf̃*yҌȬSa2]h-r\c2dqyn3|b+g(xH;m§mT0КUg« a %.f [*&%] L"* Ci,.aFpKC(թevY׼ AIRԗouɶW4?(fQ3g%u(L']ޓ'\tz;*#2rA+s@"LXĬ&{i#L>čNT>X)cRq #7KCĂ +@{dDҜro2a[Y'cG$cc3|g5hkQj`aPwLǁJY-Vp2Y>6$& Mt0*3mP`<\. VP(&#j%rRVeOvg|޹ :zk >:LSK $ZVt-jR:j>NJN_`'Q$?@@Jz)XzK .ڈ!dY UTACR~Loй,O_`B^J m sAI+i,jtb4d} `iOADʀYrK4LjI:sAx(}$rzk,5r2mM阎NǠ67f%;F<eǯC_JA[G ,*3AG[?[;h))*'ly 3Pq?=}'V wU`UCqWegK:[qWH4S*qWE\]]) +cp@l`UCqW$f{ﮊ讬rj)r1韖.Y{c?Hai4L(*y1HG7Tyco8wy.Wz.)ΟLx~BcUk}í3ꃑD˘3{ ۝yX ]]o~N[4K45Nˍ֜w^أ[%'ͧDIcUcw-N7ΧܔRISfgd@3W+jPCbV ՊZ1T+ 2jk$0$`EZ=I\\ k %i*){%암WRJ^I+){;|hWYY+qe9,ǕU2r\YcW*qe9,Ǖ帲WSY+qe9,Ǖ帲Wr\Y+qe9,Ǖ帲Wr\Y+qe9,Ǖ帲Wr\帲4Ee9,Ǖ帲Wr\Y+qe9z^oк :ֽFҕ[s6mLVd[#!- N!kʣ͟fvc M73a't*U Z}ڪ#6Qo'{G+ ,x2̑9ciuLd"\\IO #2af|Y85W}X|Ÿ։,VK ¡( !8Uԁ gɀQ ^ZCr>㝡ov'iچk2˗|e' \C›7(wU) mwg1Mqƪ&hJõmT/^RlHҲv}M"ixc}oKlkMĿU2N/k,O4 dUXN{MFP vd  "$ə32X!xAJ:PmxNanglOn{vIGhw Hҏ MmBq}~z^}w! ^Ysr[ | |ܨc$YssIT +4"'4Ycx_$$4ӁfCF~2UQ\02G0VdlܐE'C@-U@s/R{l8e`eV#υBIk=4uI[|4Ne1 jך61 |t){X@UJ7 09#wHKZ)AjH6B* l “aYD"IG1tK#W:w)_]N+ڼS k G+ýx/\O4"xG\ɲ1NzÏ-Of!yY,\Ta|N.?_xFve~=etkGTM$wd qK@IN1Lc2xBNO]3;7ե+iV>f<22>7Kz!M}89oCJ?w[G2{Kf_ݯD/^HLqI^5ō!_h6?.h8)G76x 0zמivuozyI3үm7Wف0 O/9|?>[RTmŤ-z}{cvHe #?вhd0l}L>^_jYriX;`w:u{V͔r9ǹ߆Xn}1&{%#ppˏ.7դOoLOb/^Woo޿{.7h RnN-v;pk~C4|hiaeO={ᛌ|50{ۀskk@Rt }Dzx9YӮEY~]]R/&3?ǽ8E+nJe*օX:܌1/Z{˿mJ ٬0( 0 m)*1#:%-K5P<0yMk>1{xqN 5}?E, \Ptƈ;5$E)6F22Yv:Y٫V{w/wwqdlakkn5}_^4Q`~8"M. φ˝L=?>>Bl|1+_~~7i%O4-47f|:( w17ǂi,,͔O.Q@)[}@i֝2$ (P'] AFl< 9*I1ǐV٦NAt_a ;ːq"\}&(SA tp3q6 (t{M|l9:w4i(BD%OT*Ht:d`G XY*,Ȑ1mbcT p( r3!h5jB.hcHZpeg3q6`!9Aԁ⸙>:;Ӓm/_0">Y̥:rrF вM~!δcK(Xpv/Vk|9Y;Gnd -V=_W~TV?#ƁO;AvZuYXI|lnӀ iPZ(<$+io4.1qLQ*t`<&Fӕ8,jOeX9ׂJnQ( 0)lmNe hKlNms݆ d)7L>V^l5J Rl Mh~":hu_=W ԛ͉b:"Xhn q SبmC'f5LaY%n|p_Sʭ7I.w'dqzdDҜ. 
;d2(hrjc{dc#T7CoyZ}^ ҁҁK{ЋZ74ʆFI (Wmj(MРn[.]?fr;?C*T9"p`/촒g_):B$U{+danȃZ#cqϳ h@O184I&TbrlzCaW܏n:}A#OL#wmI e#}0]wCVH,+A[=!qHJ3lK3=GU?U]U }>/tILÎOuY^7bym$1k7=:)t`qREY/D^rÜ9Q [cI"khLhH$ D Q@"E-4,ZKRa1a'9,c>"l\Pʍ"6bEw9lV"V_"D4Қ"g=$/|]Z|+s0^c`/_[3` } .5s<. 9 z!"R`70ͯW?ꠘ`FX-`DX1 @0@@Xq/ QEI<CF6^ ^znóT!E 욪]x󪛋Q]5VWjRc 0^=D9.JҫT:-w;vhBJVqWjx<+ j+z=UoWɚ({=SaϷztiVݯn) aV)Kr{ UhnnLTT4^.: rNq&&"(AYs!ϙ >O9S)Z0*{RՁ+tu@J: %\.yH䠄cP4^sm` yIRp*S5‭e*h%{9A N9xL!)[f_*7ןEwjꓮ8W6O SٗMT.k#N=t4PlLv ^%ui$_$>ÐϽ9K9Zb?].)6%`E6gF-bѪ8Y :v謢ƢÌӒ=yъO8!rzIa~0->ifl"c^ 1]2H9/`N%BHh`H]!zQL77PL1a^e&B7E&߽.dp4aND0/fM RDB&*5mۿ~ho^c.zuh.ˠaΓ cD(cgmI 9Zr@{Ã&ֽ}}Ƿh4~N eev+X\ٯzp5է*vNs|*3Otc .BWL|2do?l$ك|v{z%;-HpTY5 r!ͿzWm&leC,3K6<7eu J#o m jH;5sBʭ>5]ʊ- %\ uy(#ef"<"LzF['z?9s憁pxi1z,(hR TQIS1ϭׄRp6 :_Ї&kβWdx}{18²/b>Ĥ:7179?u3lEE`{x`_޾@K5-pcpi˩*G 8C^M)1G"-Ajʈ=` Xy$RD׍ax/wŦtHz˴t;&,0IR[;9R'}K;~/-R2x6hh>a852yDYB&PdI!AFbJHe V0<`ւ\=FgGA^OX™aE& Xf,Rg2DЅ h$B(;uGI4 viX04FA:V,I}<OJ{x,Ƃ|L#t3a&JƒLQ O8=$zayA5* *h@B"Vi"AA?Mj%O#'ui_s!kEq̒[@l$Yl16+,2Ņ<4m8g#x%pq^Bv@L_0"MG!uD= 5uz]㰔[ETH*HE{ل%$YM,ݻrT: O?eϪ bo;>r^4jk%xs&0T1cgJwOg|~|uzp"mXs:Ry*7ÌozP ү6cɍs2؜<5 zPyqb;ʚŅ/^p =E)#?Nm)c3uuk"4 &uD94&kkcPaL_G i#|ڴmZiW/6=p~v/,f"gBZ !&dl`+D(\!r -%& ?} v[sN[[%t٦ܳxس(sd)\c$1)Gcܻ<3a-N Z+T8߻zĬS۴FwtBxXA)#:43Q)9gZbD+-# *,צeFDo_Iٛ"FxZf.7Km<~ͧN^Z+,OSUh˽2(r-a!礨gN1%}{zʋ]G y oh[@6]o { GRj !l犣tUFFK&D 61.%6!`И=aViǬh)A@8q`[ja9AFhw >Qq9lEtL{^&էW뫮jeѼpʍ.IeC$ =`΍QVc 椵68iXp 4>A^EPhha5u)UFǤ"9`{aQiƝVc  0B'LW4c&w/bC`֍,+Fb mmBJ*ev " sa/Iͣq!v?\ڟjkڄ,1NhB6#Yma hERqXk"a 6cefiwZB9l½D)H J O~I:%>鈽ܝpC nv ~Ϝc9lgS deK0!aI}K5*J04`FM.՛P yeQβ{dҵ+Pq 0Ex/SwlFpmwǣ1y_VVt&YKSr HǝEzD>l::ƧY/p L!HQ}]VWHӓdQNAS{|6*MV/fZS90v{%l!ax/20y-,M'3#Z~yWf|Y>x3)1KR=9C,O7ͥMzB[ږRҀe-]5C_,Cʐ2 F0b]5УYxUmխVkq:9d9,.U}3d?8OȱT>TפNo:s%,ǿ}z޽M~??~zwۏ瘨7߿9;u#0.Ց $b47?m޴DM5 UMS6i6r7iW4v uvto-ҳ짛/KwUdMqf&hŝWׅN|\YEA?SQ}!B`@CK+u]{}ס(c^E,E(x"5Ey}VP(r.l HU8T2//C:x /X0%[̟$MK0@.㔉\FΕ$>DۃFv*v5Uaxaʵ>M:և|"2y@Bg/>@-wQ룉R*(%i|gFoR<)\w*؍\Ep|?NjKm L P%V2HՂϊMp? gӉ3ѫ/: mHAL! 
2-.ԗlG&<WQ"$9J0to.:j٠x ߌoDBٛ,ӄLb5]wrj@!:ɐZn?sJCoGn Q>\ap_+qф/f LqY̙C:Qpb#VƖ}U֋oK/!|$*JR1qHI1b"*km+G4'![y`vyEd;$ oHe[7˴} eXYwVۄ?qAפ(v[s?g?U`zwP2̇m1f~dLU>,cy{S`zK"/(ĆKu>&_ud6sDݔhDi`"vgYQJpԒEZa`tE `$lt)%"妧P1, C`.tȁM){OeH$\EƧnC.'~RzʽA7p{,Q>]l}HxͿ\~)JBV*aTVV\҄SHb.7ئ%^ -CfTqXK2:jޖ$Bc n&f ^jg}z<*Sn~=])zW򕘥E{f/M862)1IJsEVzRA|yJʝdiO(]SO0QɨL$:JlfJҊ9s6Hdtq9_pE'}QH)%bPEB2U:46G%\ Mֵ|͔frxi!f, Y[G*{!)ֱ 8 [ `,Z֟$Forݜxz?9¤,:^RB!:+s-!^)M҆u/)z5Vo)-InT;CE8C 2jW+ul@:6ɳ Lں|3h|@c{7i>uZ]KsE]m+a ~J&UO۸勞;{TR5Jd3hi,d &a⩮("< dCPIFr^0 AEW)zOF8R"1+#Ctu@a+q6[ViH9hT i)S푎 {},t\*_MNtsgr.gdwl\G蕬#RutryUϴ lW*IDVLt0F,h=$2/e6̋DY VGi1B:')I"$sROm]N"@ K{VTg6~Sbdec9@A}`7hPAs,9fPLؤRhT\@mi G>>e#>+/w.VȵM/ܻt]^?;R)nqknz}< ޴Ypz׏f!uݮ=6lBVtX^H͜*wyA0\Ep8\Lۿncn=oۼ1|HcAWD۸:[(Xؙk{sܯgBw|-NPvXCL]u/.Q]y$DIIw"n$](R:j&ᒉJȊPS%㣄uۜ-5ٟ2HR'eo4i&G k*sg G.[wzc*WU^i_1h0Sz33\4bF{Up8oXYr*r]}*xf7NU؋+ /pU?tR •Lɱ6zoJkJWo)Uk+:/p҂PC*4#\AB#\Uq]wUTWfGo aW 'ez`uQR*102+3ծ^2GpU~uq:/pҺ O*#\ErZ zzqT;ζRa2Ӓ/VW1].|C;;&GGg+c$ݙc.B]X 2ן^;|z77+z,̳$V_go?_V5ˇm`!e%KFlsqɒ"XTVG1@%L 7/tA>ez1?sy ̐w;g᳾vgy o?y$,J<ux ׫F{ Rކ}V,:STZ lN!!cP>=b7sB _rru|$L>[HuE),`[ڹg Rl/?v^'5v$CΒƝgؕZj]AZN{ϓp+wg)L&p 1Ӌjp|~p*;KՅ"ӻAQ?{?ĪG?{68x~|/*@|-a=%\gpl=eM<=Krf~-= vak.#}%ZϜ>l}zҙ=52th0w^D۹l'sӌV֛Krhn'YeF)j'LT2dI( + *Is6Hdtq4&o0~dd0QJ8W?s`cF_)]VٛXHr Ĕ zYΌzҰ2ZW$2FV˲2Cih_*ՙ[YQѤ/Gfiܯz\Ɋ'wدX}XNN.{hE /Wf)$ ^آ(tʒ4H2$)rI# :ښ0BTbN^jJ[KGɕf2Ҩ PZ#c3q6#c;[6bifg;>Ux7O_tzg pzzS+0bK[ᅏV1C(ZZ5)c hͬE`۲Nad/ ¦26ɂ'[ קmMIlL%Zn%fĎ@1fǮmQ{`xwe͍$7(Ż%1~mwyx/)-Q2I1"WKIꮎiM,e $+5"FPsjA^[IDD2B2$J9v|kj s(=dFjX˶&HFr0pdoVK"SM68a/ka<l}<" qV9At2k>3:ց&b HJƴ=8&&N2IDl̅OǨ5ɤ[c^En c:?L^wJe$3?$Zsxx;]N;PW5_}Πk|o&++R>:ؓ% Y kB&cH7._Ok~Mrr,[5E]SX흄y䝕ҲFP`E jcB~Aŕj K:8(+}7HTT pժHJR $qCKHT_eWaִ"E% 8DGoR2Z{HWKo2/K{V3ȃ"UXF`lFF t2JŚ$^ EB93_`U9m8x~9ǝTZ"yGyjQV\ֈ)͎.?.ThBlI,b`^Taz&~1?qeI{*/3~;zL״ҽt PRH-8>M0E[AxeL@m⯛V7/Rw]ސ,fgWzK'ou@OfU|̓s&k+jv-3ߎ=5/plƻ0oGr>:=ϭs{'PKm xw7{N?Bw\}uQob1Q=G٦3Jl!uݳL-uyr_O~?r~HȎALA 6?>J3^_}ӻ?Sǟ~}/{߽yR%X'QoW5bϻ{ AbPgʷ' 5uѕ&srܝl.Mhm'=PRv.9| e30+X{4c{b},Y{Kp&֠!h!J $lt"QpbPgcX-`*v>xz Qj:&0IÅ1 >!uW_& 
LFVs\MFB<~aӢU3top=;_sCP~_[sCZ'bjvOL5L U¯U[%b5Pbcb[K$K Z0jw$ZS4gY߀O./geygoVW~Yg"ܸYUiy^hxbM0bFU ;o9'w'/i=ڻL$ 죤̶(*c2"$e1dq@PESo9(#;ʢ F[)P {>JW_enLGݔWl;'ݔQfM%\{g&8@ׂ>NK~Hc*o{؝zc*RCcﰱVsN_MxeE(-u]RTDo^ ^p7_SpZ+b%{2lj6rI)0[rqŪٿ:Cg2x{͙eV1f ؅w쭐E^8tWm~pcR]Ww79n-4^y6ʎ\R>}4_8+;|ӭogez.2\ϩwzC]\>]_Wtyڙ!2{nC(ByW9kIݤНѩV\3,. Ajzmip䤻w9twN(R߄3IiEQ";Mtӑ9!%Tm?EI2f%sMQi<# gQ FXZI_CrП-ѝy@Rp`5W_q^Ƶ/Ϙ_k-` Sy]3W«hԜE}[n1ɚ5D3)ہJxC(H;哈xC;QT#5I5DQ3R*2g"|Tʂ8ᖁlePjr>IHDu[I>d#x%9'P&(Tq!e%[g;\2]]җA "PҨsΠ[$%Vެ=䤊s%WgjkH0Aw419$Myәʡ (gH[R*oyII޻1Ѡ|6 ҡ0D@x㽭](X-) -!s( c Ηz59L( %,Wuu $l$_z(-IncMhN= Wе8#o uC :4_)h~EŞ!ܧtVZjz @`TVK5j?XT=~z`b|m͡%$Hu)Q؃uۨvN$ fa2Ȣ#YPF3^Yeb 9؂%$SJ]V`30T>}eAJx ƵؐmH UZ򘥅djrV!lz1fnp~ Bw,!EܞF6ۦ]zhV<qyY7|%6 ԛax+~U}U)Cv? G|q|_ RƧӋ:F-u` ElH {;<xRn{ /-AXb0}>frPGN#Ld C}/7C <1y磸pqyl327f}L}o?"LIA,RTʅH tc%[7B(D,^,B]ˋe<0a1]@%L(rJ1#DTj6dta6)gTC%jQjU`PwQ(('Jإah&y1;'@WqH>%{yZ[̾b;#z*jq?vm2/m†YW~C." ^mSxV"ZtACT8Ɣ42xL#qʖG@bQKlbe̾ Ê(%vgB Vx* 6R  9h}3?@6R^YC5:+h*&W֞(np$dw!F*C5߸$ޠ#6X.¥l,h+Y6 4F 2[:'33~j#f'߆%]T۬C@ `W; @tOų_Dk9Wz 0f}btvyuEa{HWiάޏH fA˰0w;z&L@|{_m13FrSS p!:EQ 3['7ͬB,EΘ`8-"t&;DQ &j)jX_lzJ!2`)Q!I(*؛,$1FeOf=X֪Ev-Ф 7O:ǞUVTѣ[NcӢW>`!t pG ]_0lMn?}0 F4=Úg9_a9? zᢸqHH,2Ҍ~+[9|!Q2۱I/ձBE KrKyn=.7^ńQ\IF4\i}-s%XoQo$-YĖ%^TWk 5g`g`Vnv*7y,ZTI{Up]>Mlϯ`:-odgrgs?5/^*,z2t-o*\t4 |#?w7szn "v|~&ܴTg'R(1*EkV`uN4_>l.{3 )tOsSǟ_>!ۊzߛ.k?RHM1̗/_=,L\}3ia!ٛ0^pJDWIѠUR!gb LUNߤuXؤEF'?C<.mE59iu8H0uL"ݍFD-Ru_Hg5MJSqբ;yɫUQ-jYrW+iVL̤o&wLB\<ݱr + 0>T jWmIJ?[&h[o:6~[0pRIUV;yej"u_;z|ĉ.Njj g^yyָ̥3**5mU$ƈ:LTzA=v3m8D^tvglQEQcQtxAƂtvHDM ℁csFa)Sz^@ʾ%u7hkobG[/GEʯ-J|u=V}ǙJ))ͤ @h$]U΂t7r8/> F0\RAe9#)kt4Ry!_4{REJh)‹Q%5Px" 3V8)_eLː Cwq|,[LZgyy~c:֪Z|nfӋ6NuQ~I;bL cNcJ̑x/ ZQmYbC߉JHx b`E`'ʵ 4ha&DD>Hs̓BS!  >s>`.0 DT)J"}߮ټQ(ZH=3NվO8J@[ <9-b$̕]n ,z*Njny{ɷ)*o9D#9TSpq$Tx#eLJ+% w6nUd>Z7 K1).}ԖU`̙R]#cglFtΰ3 EX(Xxg4,i-4VGw;/~&?9bcj m@b DyQ=lTÄF lˈMO !{D(IItY$R:":PT]늜͈m`b jwڌk"=[nY e8ڌCVRyRy/S<\$☁ !l5 Xr0 .ؖ#Fuw O/슈cDTD< ڄ ĸ41Tis΃@- V)ϸ.F"6J,Cd`(QH : #1)DX҄Ij$fX:Fٌ/,\'슋c\\<:=op!l"܄:U 6@+r1F.. 
[binary data: gzip-compressed payload of var/home/core/zuul-output/logs/kubelet.log.gz — not recoverable as text]
gWAQeq"̝~jEUI9!Hd2G%SZk| )im G Gqe6X2Cp,%g29q +JT&]VgOBd6)A h~}?VbVM?NlD4"I #!YːWUTf(i}\d/%"HI *JfceE&nEśՌ ݾ\b]͓b]ĵRdئlH mfEiNN|ٱ#`<[s4{gnw{ԢJ>*Jd養ȓ*$ԉL2Du;T1d_hkrJ9Vb\hU&edso0i\dj[j춌J5[Xmfj ue[/p!Q}AWEliƏ85Y'4<ɟbR >e%BX)kS`IQAk1 ZfV2=B|LMM4$#\2. pNH`*[bWg~Zb͎Clj_r$ *L |$杊#-W'SELI =JU!2$UH5 Y#3̓Z"j4p5qvÖԗnb"VZD["ڋEXm3&flr08Ƨc"p$ &PĜ5dY7J!cqcT p( r3!h5jB.hc$&-Ho8Ē:U"VgE|EzՁ⸙)9:͒Cl.^\{Tr2&g12>Y%%Bo..=6;յ^lO`?'x׮{?mنQLw](Qi5=mf^mR&0)͍P`Q7Vd/Ȁb Ml MdhI":h+vwM5yqbȼ2:ZbBD8qĬ&{i!L>čNURWMcRq C%.DY8="iN'ro2aZEHF8mzpA~ok_p2נEΠ;܁jkNVQuQ_mb$WJykbS ږKR1TO9.|G)JIr*h QL9OBG4e ,^A؃u]ུ $0y -f6D"+KNn`CZFV 9ՠ5z(SHC'+G},sP 'n.uU/w,YtH)%?=:'eOPQmHnB| 3hb.H_ ׆D 2`913E ¸(YG ]6SgBa0_:2 yb;/,xL9@IH ~\7B;?8B4k_׭L7&̞Ztq$٪7$_iKbtRBq2}O"9kc)%}>\$(w[e'+_;6!f7- -üw/O|ݤoxa^6ٓ[EHhf >tҿ :rv3={8.,uHg6t<|a=?tgy4e<@{gUlE=gxSੋˉ.Ԝ}c;ߺ|:5ccxdWt7(+/i{rRԦq*8i͉gysˡ8!h%Ld[:#VZ~"΂]|{W쫿\IAD?^@>:NCg6;e2t?yX2t3-wmӶ}]w~^uLMϟE͏vt~ϩY:Sq?厦gOCfiE.~?[ZE"u%x.%77j\#XC6hSSSZ%J)^v1Na|C]=|}{M}b[,QQ&2"{xP)v T*Q%>q[{8򐸊$E3B]HS9"ڃN I\JT-qs _9n8K[>[MI&<5Y5| vV ` B߈^;Il:^]a?3a1 6nU,N*`iIӂ襖I_i8s +u<# +ꀠ5ODD%1Xo,jǸuN1RH8!d#(*  FT̒52AR8-<$& Bz dg( GϏgh=I )6t[RdY٭ 52rnWK2w75Z=iQڰ̛ߵ~Uuu?ro]SưȦ#UQD Wlf%*!4 Dhb)Ch@рdqFASdyNWzL[yJ/N)%6l󠴂z)܈ 7rX m_ f]m8]֠-B>O6r^Mst~ӫ? 
Eו^g/ˢgQB톛|1"=*7XńZg#vi])9u)@7T(U?6ӘR.GQfDJMVA8aMf۠$tuɿ}'vui\?zݰ}}Zp~PI0^hQzg d1nZGٻƍWXzc6ɺxw*C4Il4^EP W&@߹GbJHm V0A0hkA.i3# Ty/ Ĉrb,0"TRAh mcjXo[ Y=,>ɻJ7iͳF<_ T|EXs#e5dq0AVPfRu!b+xj)/;b@\--` 棻0wq|Pp,W5$҂1t c$SR_Whu'N* {{Ҡ4~=G JoRҿMn;-6|w-&_ 目n(f5wpxi iHqMYYXt񕁭FMY,o \.ut(kfI%V!xT?-J"k ,gzk3gwN ~e'>lrh6( xh\X}nӯxCtg$X44=A*uغyϻYK"|\5'aUSŚKDZ3p(ӥՕԘIxxE"+V(e2σxP&1bښiUU-dtcȂ;^'ŤE2uvJt4SU\W0~]42ۿ߁AƑZ58^i#_{[zkɌԀ|?ӿa CJz4جqLRPeve6>`ߍynOI٬&)}ck/;ZTDDv  : 9DZeYe0D1^Q 걋Phh R/f&Y 7 x&ST`,Hi6KdhI!N`l3 SOЫEM6ү~c7ؽ$CnXT9I7zqaegHoz'gʕ.cJ(j;V]Ȧ͚6 &S!+3-"LDY4:kx97͞5ًoOUQ*jDiXk㽤J), T.߃vnAXRCQIlB :ɨUQFYJy>%sKO^@!jm7aaݒ>Qs.EiTȯ>[͏9]O K2VEj(B,EJ t^9 `#!9z<`IN_s?Q8\uD-'aT% L/ @YeyYYqF3ǝY)'CKtN9aA'6qh9tU+p/yZpmqiQȆR-u)N[g:,#0Paq܏tiȤuR7]:;K  JA8J9s',&ZiYp<5RLaa6-¼x`_[cӖwc: e.}?Yҋ Rn`L`7^Q;4T^n|˭J^|~NfeJ%/`~9g"WK9?Uv~~&"l7x~jMRfa q= q*Q)ZWdOO!ͱ8?z]\+^F-?ueT.eh5B\+Ҋ}?= KtA &rU"]J tqWoP\Q$`.G\%rqU*QE+ޢH7E"!,8mKX|o/ s?{>Mqe8D^K3fvLX Y0Cw0!ROAX6^}xՏ?.l5͘W !vY!rr= wfb"ApK=7&{dhY#0#XfJqtLx}>qeQjq~3uIbRcg6(Y8Sbђ o8Hq 2阃N1 0a_ZiV>H$vjE/کl=~<".}p3˜OoKoñ BN^t(:: 'i3ӇFm Kn"#JfX[eD@"q*H$N Q@"E- P,ZKN];=f;$(CpF"[D (Pʫk<$5]"(.U9X+cX+B".0R9Cu)$Hg%m]xٻx;IHj DґPC>g>ZI|"kFCa.d^C0kVŐkҞsOZGZkE2HRVa S띱Vc&1P豉h Z!-{Ym*`wR( 6 *V>Qx*裊x ^'ޱ'lWT]TfhWOnгˇ; [x·=8tk\8ǣxfTrʍnC_wn: _pR8үÛSa'icw~* GJ!DN5ԇhR*W %H7-Q@ьތ+_G%BYc1Zk.9tyw|/p@c'3XlU"cmZSp_x*q{|yJ66E[a/3\QqƏIKr[x>Y;N3[IElhZ}4g6Vp FOt>f"oFf02+wn{Of^ ~~2NbFn<2柋5A St!֧>3q V8֜Uos:ߚφOXn\9_NvkTZôX@QR_Lp"W_L0PKjzp6| 3L;u-g>#8R,1%)0TCp$j@4^oHu~"#a)DHQ̭z͵Q#53{ˣB[L zt4D{S(sEĖP~JCJf=xAc!lv*"'N3=ui=ƺukL[^eSERfKUסUZ*.V>z ׏WW]BQl7I1M @#,5e 8 ԋȞ;N#,Mnsn Kd 8u D9TAh֠`3*: FIAP+=Z3SGsy=ҿ{]8l*;V﹘eӵD%c.p$G* Cyk,(]Fq8:K {.0)A-Qki:G]q/Q+`NSpw{҅4mRʺoZE%r*nzKIwe0x!6gSᡚ+އbF8#Nd8>slz<_F0}q E]) f6TM4 J"RĖ L@T-|[ҁͪp(j9\V8m0-yA)ıib Qjmb!!š}}VȍN=@5˃4Ws18C>E(BeVdX ܛLjV.x8\)t;v.Эmlxw8zw܁Zɕ&~Geirxy[#Z ѤFMh +KWh^+_5l/3m~خ6߃-Οe/<5dۃhSa^֭A0AKW!#Dp.`"a.(i%^9-MZ3 !(t.J|~ yuؽ?D8?o7U~ZOx(=)J2&'^Q3htYpV0cQo2}dDя+^݇qt=pq""2\|J{HyJOJlMR.28![>^ɀc\-mc_jiSlyN~^~΀c:W~7%x\sf}SbIO 
,.܍>ņmEVj'w>x׋vl05v/^o^0[zu̺켅mJwEX+ oH& 12r[ }e%/!kGPr`lt 22p֕YSq{ZMHVk+vk[c{6 m6Yz,?x f@mW_O:mq'[~r?ь33rk0됂ݿ`s?N<3k#!Tzw: 6[݉GRשeRe@Oy oĤSyϞ1!,;u-+~D F9wU!zi D"3j͂ ]Y83gdF2QK1#xeY7'g3dJg69~z Ĝd@29wi)Q{Rlt0A[.Q=U"1_64{&m5ݟ&kn룺9Cl߄e}ZZuE^xX(''ʕ6NrH:NwӶnJ&Wȡ@BE H3 Rī} w󔞜@;4ĶGy,c`.<2hsqϳ ɀbp5(1BVS0 Og+|j@^4"Y!l=ͨ>>]aϾ1`mz-w^piloڕz gOےؒv3?ЁOfDmABf/j uڰ:&Dʒ1jS*Nf>KajrsQU L(&$8Ƒdf ӑΨ3/:s~5Vg+)x4>z ^@,Kft>l RU@'[W*iPt>!JsΐiRQwژE ðtwB7k}N[&pxwD=Ll}lr^~+ߑX/tbb*&鍟kGthݨD eH)K#17eSgpmŷ^}ޑ#o~t1rs3~e~ŶgZbZVe[.0t[Fѭ?8GDm)[oڿ߿+Sr~'?gz%;?hYtßJ8=o\yvfV4%hg`!EV5bm: [݌_]GN胖e}#E6ihw:'~l]6{ȋ*Nǖ/-6thw.)va7OW/0Q󣃻0}gz?Nf.0"ǾѠ8BtÖ l"rp7}$\z;a\l8K`@l,ӶeL)9mPг<Ͼw+Y[FrD19b6"T0DVC>AN.axtfwyǽb^M ,ӄ75KlS hg%6Ž?Z]uU1AȉHK9$T ࢷXC7#+3 'x]`6ބ9 Ω]ݒ_l;̶kxcR47 q:mf`|nw*Qy>$T4Vgw#&c͎9̂yD#xɸW:$T,qx SU}L(SB0PJ EzTIcZ{ѱџ6=˚dT^T(3&hmI,bX+>Fzr*znJ6K/(5FC8s#<$g%*s O *NFg/zXA6aFc'1ig$@F lue6*Hty,g,3EvkimtFj(p&FO2AtV{ 0%{@փ'^Y,d=ضW hDX5]t_رj賨钉h  qJBqj,, i!g|dh'NhŨ>r2ܻƴ7i]zBOfV0^.\> i`$$HyTabY< B됌lN'ǹvΓDbSx6 dD+kp<y7hyN^~%vxl)Zr][',-AѢg4giͲ'\ eAT.4ƕ w(=N{3(?uq>AsoPZAYlX128llxHL$ٔVa3IɧD lܥ }$.rx4ˌg; _iP3⦝bA߹~BL K mr6tFPZHYEJH@kYU53N08פD` gO<&$*` V㊚.KqfL4KZn}>jr[ m;-ëowki[TqG#m&sޒjגSE>*Jd養* BjiQ3Zg AXϼK*RmMF!0<-E,l`vA6g&ZzL UMqdUaa58 ue,= 3*6/xK֝x $v&܌>\O8bs#$ǬdH"'!,M5S`IQA9cT223ۙ Q0I EMM4M&ێYNHd*[3E |9 rYj >Dя !STQ_Z̕9ƠAmViQ@K}QJQ}-io,1.\,ɵR$39 IxP޵q$20;#!9޳;X gC5Ej9mm~U3$E=(D46h#UNgI2ςR.=mloB}6!ё ţpf`t6 (itЁ P6:*s&rQʄg'"Ekh B+-:*_uP¼Kf{%2\JLks0"kCX!5R (BR\@|x(;nS}lv՚4JsA+2`_\9d$C6-X NhTFIمu/"rg.ؘ )甑Ro=8oj-DO?9 :׿B+R"RomGM tw:|B8Yp%@#\'w_jab?3 ~6fip gҴύM9M0Iy>]2T@usUթŝz%+D|*C(;Txأ! 
;%{G?aUU:ój $x[kINd`a0 w ?:pݜ%[Íg!pXO[Qh)U󷱋) tVH=b׼'J.CqۊpE5 k^6m5[qY%sW?4h5r6=_x]#K0'`o=ە яi}«c7Zv0^BIb$5FoЪEi49 z{-x4<[ttu^d0qJk]4r]ڪS{ZV,A|2,zVg-BW:j d◦]1>W?y?߿<o_~H9|?߿y VO0쭃ؽ!|?T~{՚U5Էqyw]Um6yMר9:mOŏo/GAlXcW=pjBT<@ zx`4"̋Q2*3?eFNbM!|ukqj]1/7H$ވ'Ad8(E w`"JX|}\^q8 &CdO8{O8,(߻8sM0ZApX Ʌ)$鬓h&8)&s"_ie]J$1&eelјs[էBt$0_zxp: ?z\lc_N^v΍rÜuP\~Qע)`~]M [4 LgQՐTPml'>k Wg̗O x6YB= p|[m2Ct+kEW *we++C Ct5 ]!\ ;NWr͞]Ikw5Mj=EFя-AG5qc396W0+wjXѸBll3GRA%Dtr:k1.((MoѲAc^_j"@quɇrBUuT 73ϋ/-c>2W?EwZ)(3Qnc7g ۨ% è&V Nbi%~dL8yAW Y6sK*dZc`B-etiKՙ*ٕ }Dljh&|| 5LW>.]mǡP[jۡLi-;DWaѝ++IW Zt(-Q=]=Ab̲Nu.7]+D+ͮԬ'HW3!l KB;CW;8-Ud Qj+%S]+9 ]!N NWHWJ4ՎewwHlv@H?J J;]`Feg 2Bb箞"]i!!+Tg $2ZNĮw6;Sv'4sWtJ0Q5g.sOt2UPרr\Sl qq/.2@,cV)@+ǒ[I|7{ޞt~`M;c?nwn?~8SqBPt`);p37B#T;U؀DOW=0!Bfp- ]Z䢧'HWLVLw]+D=]=Ek%BFp ]!Z.vJOOWORS w.]+D+خT'HWZtco)ZEv%ڟ"])XAYw rBr%t]JkI]`ƻ "\ *t(u?w dؠ1t;f7Z9TO7X+y9,_ֹұ KR'ـ1JF; l@iW#'"׃uxFР J$/4i?p}Sа9CWܪen|ǏauA}e6#peE6+JK.MFH9lJ>/?@_ 'ЫKYn)eƱ8BU5\} tS\H]ປ5zN |[|mPM>VGE[pIU=?>Lfiyok [~BĶ@tyu¸*o{o8Wɠ$8! QS/ Fd@$OFQlhkYflo0? W2/pZ~{ ];&ՁNɥ2C7B>罥B$2Lye,wVz=S\4[C#"h/){Yd1SQf0+QqԯF#{/_z9Uoe<U; xY7ه@W+FǦJ/\f&w:()^虿W:'&Ky6%_׷P2x^cVZmwe^mQ$\o\Zxg-]ʄrȍHb "K F*jD[Fr@ Ȁo19hHƒ ^kbBˊ5aUN3K?Ŀg׽NAe 5:u+u_^'jn#yy ` ^0Twi[k߻$8B/eJ|,Y-!(fU1Qh{fq6kׅQF織1P7e@=)`z]>*(e 583c{Jk\ؚdl˅e.T=>(.aۦ3ę\b*Lo(p8:T?8cSͣNz>3cVO *$SJ9儲DxtD,0s$&W:(ᕈ٨3v̄Mª ڴحxWB]a֤c[-Y`oxe:r)^H`.@ ,IHPpcl"޴ʇY*3 3U hHP,18QpZHI;2v>4 bBbFlM>eD2#{FIAx @~h.v1!T%H@L됳V"CB')8Yi >`yGH*)\ "i&4wi--3bkpngį /V\vi5)ٖm˼h{^y&wVuynC&[l (Fh'"$8^Cakұ%2.ۘ–YaחUUtR[g; ~zIYP#|05u)QH@b ;ϹL*&&+磶4GKbnokpdӝHvab?"L |ښE۽U^&pHGkBnFgY]BͤiBBeR:g"wLxv2i.RV) ߢs`Y e*̫dᡉ'H/s*1%Ahô 'Bk44"9PD ݪ;nS} (R 'L'W zgDss3H7weNHv"@2Jٻ6$Ugwv~? {9'{nq \b]H.IvݯzOCP1,TtUǤ$?vT>yS!2RJQhEskZL9 @tT"1Rtcq)[[/QxЄ1:$1uUmh~K$ɲ JGBE&[-kReA,F? 48[? .ٻz_pgCSD$ού23L(2ow] +ϥJQD+ qiLs8! ;Kù?si Yr~mQ$:++y/s_wN. pBr[//*YVwȅ? 
9?cW ,̾Z|WI/gWJ.g߫>rYam])$uE/*\Tr~Wli޺g~T?WT_^gwª1Wc^=ܮvAKmu=7\N _ B?Ib$']5=5?F6h}EF q8d`ŬGDOW_gDS 4֜od%r<,qu42\OQ~"{-Fs˝MNם=\oo/{\_\| |}7u30Y =Fzw_њh|܈}|yjs >f5ՕAKZ1?~3 >cv=fJkUϠ+a($џ~EE_oSe]QB%B bÀ@^ih}6NiM|DTj\F;$#Hb`p*4 W4D>}g'=a!ʊ{~?EÉsM0Z%kN# Y'Lp4R*Hxʾ4ήǧr5q!=bZwPCwwN rS ϭA<[`UwtIXbyHɇ#3el˩5绪~6;x\ŇW\+6_/_ֻfrypALisX>6J1Njֹ <,l!"ODH<a U[x `sM6PzX^ŵ~t=X FY C/*+rVw$Wュ^}n>|?}6`L6Bۂ#W?0/,Zu=lj^~!>c,'h 3_/7s*|Kz !K_wPs:CN-댼KWa=u /K Fii1o&:x__Z*Nd:=TDS\ F[ɅēdL9*=`B4$TJ ,A|l](A)C-d1qj DxS[.JYŜȧax&q.۲5rtOr<}t2~$Dd|<̉;jFԴ߻ly9wMQ7eD%fT`ۛK?en' ek2h鄷dR ,+I߀ aggM>d`8,|tK#kdDEd,PL[`9SS%.c'KoQ*1nU)i<"Ht&AUI !=#ƘxVZ\s?-8/2dzl._)8߽g)jOϾ+<~oR>samJϠ94LRK:9Oiu2XT2xxΥs B >+ B J>2IVAW o{EFgcslΒ&"BNrP%DԄL!A3IƆ[W\-ůuV]: hGˬN:t *gԛ }Y9w~=/j RctN«Cl *)~E0i|W̧M+YPN̅}rb.n[lzů{@&*uznV 19$X+|^,AGr1 eUart3,uН!AyuC?yߌ]6;A:ͰۖQSƓ·JO%g}Kan)캽+"w6!V!efw׹mlpc[Wϭ˺q:{:]'Uvf!,"Q;==ܽ.VswWcgUj=|f}䓓k\/[lSnyLS}xM'=\a:Oy4}O[e|uNɧxW.崍EB4i 1NFQ fB[cr;͜M9UsBTI@|USIb^ˤJ ddN2,PlJGrjr_gF JhU B('OmWh7]rZ,Llxn] VC(F؂}L*>^Lm0uԱX ëOW Ֆ.+p'g"DsIo db>}^T_k^&^߼qӷ _6Ӄ6k["rPZO'a)i~ۇQ[A;QiƊ3䋊2E8RR*&g g_o^?A~QY뿉(!Jm:o [xI^N˟{axmkz }^?_Uwb+?U,^jQ=p?~_β:{ɓwn%([9 \V΂dtJp//QӺy%t+0ZnK-lJ ˞ߴ@TAKJp*(+Tgwf'nw3}rNfaw+B2%W49,3&*KPQYI{7ASz]|*FuȤes6{e;G 5UQ%1II30O.:bm01"Y$h`Za>4g3%C(oFՆ7 ;Û2 >k=jLm,Gm?s=j jVBl7|REZCɧIK*A8TPO~":񊴧~**nG^XLEZG4hegDPV{BQ$O P޺\>(bAX%)h t1BIt10v"ȹ.x4O]˓l6HT1/=#.9{+gɘ ioWw!)qkHPdT:2ո$9(0CAR$}p -v#8 *:mS0 "1\BE\jVP 1q"~DS)/<.u3,f/C]@7ROc a!j VYF܁G|a'-ۀO;|Q|$cD}CȽƏe>\W5ݫ;t #JxB]qq/R]znH" E9m$D) eBþj)@dpI_F[Uϟo u+wk}%IS> Nޚ݀S;UiRqupZ#ɮA.FQstؽB6z_&)tLGJ"D0c:E'O":J'P&viB6@W$` vh,][ %t!O&eg۴ʏ͂Iu*& AL| /Wmlart,UʡNC H`"#QtU'֪O9h\3T)NB A"B&5 9i&*ET B*A@8ѢK=pR:%|} ;Th%4c5猝 玀I&"<0fȉ#R^GUhj% Gy-ϗq':\@;w<b܄򿦄ːy /\sQYj\Mp2.BLuPFU[_M\c疞O ֓u Hc# TBh&@.P*u%3z=<_\C$:!D@Qqg"Gs"Q#$1I\N  [G:tGp?xH,š8\h JqU*&Vp.D;"q(q}k\'nB2Em[v}El /!3/̫~b\kp!Ē!R+t%ƣ KBbZGD'?_R@tPyR( rt*Nh)Bm:^ {)9SBb {-whC5}8#=c*IB$숥9QaSMZMZ&o}')۫֔1"/ysd| -v%oNdt]  GnFzF9f0&tVtXp+Ak!:8٠9V*N* e2Ria$w_-n`eQV& {ڵ,k<l23w|G-cA®Ҵ-ZVJhI93V\10!tuH;/+xjbFX ^= ]ao 
p+BL[}:dAxigcpƁؿ~d ¿W`09w-kIm3~K ŶӴNihĶc2 D+I=6uylDr"BV"-th9m:]!J!;:CiZDW5Ĵ-M+D)HGWgHWJd [}+Z]!Z͛NWxGWgHW&ڤ]!`ZCW׶F\NbЕ&6G"`[CWB䪣3+C5m205tp%m ]!ZvDiX+i5yXOAy4))7qG-UTQ$5t̛sSo4L{۾`y3 T >2z_u޹I% ߩO|Uee}K@kpYkODQn~OЫׄXj7v ?-]톖(un(;v+վCO) T'Cm+@ˉj:]!Jj;:CbTjkZDWBM+DMGWgHWE Uik *tBK2Е`Ԛ6] +U7e]%]I&!-+Ns\BW;mK_[;:R& Sڞ+Z]!ZNWRv_ϑ4g1/T]!\nBW[Ȏΐ WU{WX]!\)BWV7~ Q6-̢+Ei x>OwuJ(!Dt|;k9r1i&KL #9iQ]аwOƗ%vQ%{D/X" I,,KE09,N&V|Vd?Dbj=f!2'aE""[x-Hd>u22%Ft|\^Z=b>KqQDxp9TZ>-Ҩ_Ny{ Ve ϵ9(cVJ9TOl13 yTK;Ua}d ?/.|97Pß(#^+Dז2\G^ n><,w G %ʅgAMb:/k.V$z[TYna=m1 A[cL#\њ?D|GD;c1C_>}TN?unpʼnvCNeu7a*t;wМ++l ]!c>]!]1aMAvln ]!\FBWV,KN(m]!m\Ju[ JtBMۻEJHT sFZCWt(vphGW/BWRZBl ]!\՚vD{*7PҕR6m#`[CW׶F؋t(Y]#]i('-+۳wpj ]Z]M+DIEGWgHWFZaИbe y7RY݀+;6='e-'_?RyL6y `ݞ kZCWVATЕrk"$#ߝ m :1]֞(b'4ۏفLGW[GC|ֶF.Lʠ`,ӛy Dco4\W{/J(7d ݀P<].jV_e)c{sB+__=-L: H9_^/7:?Hw(5_et{ A1 ,Zqoʦ-p&`J ^Yu?vE͆5cf]_Q?x<ҋWN힫&ܢp%k)qf{Hf0֬'TX6" {n0)8@I 3s{  ]!hOտwax $1_mq/^`\;&ocd砶DI[8zhP2+ qX ߤ%yZAbpOpYI#t՟zⵛ"_կ5aL Xv4biVъdNrJx%uDrAeD(1- ^eyAM$Z)t!us"0L6/,D2IT.[;j6u6C#%%;{?ѠZdBBL"R5bl" c.ORN@h50#ThuJ.%Pfڔ"1`X*D ΔH]rgW3-t<'bWA!Td)g)C+%hY S=hR2iae%B0'A8i YzBT&3 cfLQf1 BpXуF'"dl9t  {)nh2:b`gP+ 5טǎp8H 9F0T0f6M~V`aVIbF`vB+h3bRF0P)"@x ЈĊ=Iƍ?dbq 94E5 Xz'^ؠ0.)e͍(Ϊ@5I> Ţ .e-%# h&[CR@rRhyzC`лΪd)q%.IVgqXag;k`sqZNDK@ԌD.4$ˤbbjʔhI|Uk$jgh>d2n.smfskHQ.nK@ug)`u$ .Q0 Viu$AqKwAT+75R(+)+G$Αeht4n|"k<5spRNY0 KU h -dU*Ks i @&^pfnLL%&"+-dSxoS,.2HQSLM^X< uf$mBؚ P!4 1gyh ﳴReB0H\̃bJѰ#xb 5dѮTmFnSoȠxhzH,>P)SnAku utWjĚlBv?SCEAl1VQJ껱 tS-2IR9򢳻 XK dfsC&+;B!{Cݕz- 3`ܑLAA[@b  (g"%yffT[S+[KqC zP,,8:0vL?K1;Z8TRr! 'n!N b=>* P$&=a]1"ƇIs= :t8AB ݠ˽jbA4d19#8r>' rTI0E!&d'!a {Ʈ^o7S-xZŨȏ溯@9&fNHB x@pp0Ld"JN2Ni"6*3*}X򘆒'!Q)uhA E2!$JEU28ǽHˇrD2Q $(qx}MHY[x+R! A;XTu,ҩQߙ&3*Ɲ7]m;V+.I NCO?,e$_J<>`}^q'k7T+`+ y҈&xƩd,EGsP6D2怺 sI3'0ͅf?AJA(CXtQ/Hnkӄh1;6t ā:(@ERT} =$%iK5gG@3:saH͙PDdRyd+];bmQ!dMlM52QZBRTЏ6$j NQTDkzs<\:jR ǂ`E?+lfbl_N܈G&W;`? 
Q,^gf2U#@*:Img(x^ޚ^ɨ"""̌wr#%l@wB|n(( Œ, ֛~ ؈v=|_^At>t/^_];L&/G4W<#uw4 rP9&9OB)fxjZduJٺN('5O9vPU1Aʫ opkNA͓&AA%"`aFAM4DsCyr#cEtJD:uN Y%-(H Ȯ(=zz9wQOa/4Ճ '~ۉZWHܺrne!`!#u7Ŋ^dW3a‰"+IYTQcd$.=()Fr9 8!U=X/נq$1hlz9'ErcbÇ35YnM 2fң!̵! jR pMe d;ڛLCNf*~@b :Z3%2SCi}΀x1O(ɺٯ7aU2:NG=5(TH,eJi%'< JbOɇ٩MXUfO,1ЗS-ѱIP0BE xqN-( Ċrw'tB!(︥AC  ?n?_W▫ۭa}.xv֎4Rˈ/on(N{LmLz+ >n?o~i7do>>۔vU"yuwtq!;˯WW>ҋOfJ_ޜ|xqhc>}ƫ'zs˗쫌|˶«w7?Zs-tsq}ogFTd@{_~fsgh.o.g9aW[ L3ސߙs_6r~"`y$'?rc W >N-:bN_YH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': :G4H8r] $~NBh;N /|Ŗ: N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@߭ZljrAv'}+ߺ7< $*sP'@λh N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@ڮCHN E0N <!;H@[t!IQ@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N8Q ~qk}t{/Tz\qC_]{o׷ջ˛'@7P%Ɓp0LJԆg+*HsWiCwDnfJԦv\AerNqA\`CH J0asWj;)a\Amv\{D*sʟw7d~v=]r~\Kt||q6AϮ.Q?yѧ1|z}yu}s^_~^vfeWxd~AC^_]Oq[K0ϕ}޽nO{0'okw2U˻o.G?>ǒw'i^F̿]v 0N_~@̳Q.OmIXܪcOjY筶1Z+Hs~,'My]%f'MٕӓZh+{w[Vnd.Gϻ|rO*LfF~02h6XY=W"0>+QW/*u;U~dr> ~xterÉej󉲫E*[Yv*+Dp%W"7(Zoq%**6iv \ȍ~\o{y: )6+J4DpWK6+Q7?W2q)[un\ZZ}1(*S\mW>ekx \4 םЦej];W\mW!U*Ȁap%riAQy*vpsp3w%r0+6~JTfV\mWb2R tz&rǯz`Wœi;E?a iLyW,eo#gQ)E3q⧨OEÃ4~?y\[ɗ vNlX(IqPm8)b UumY45WW"(>Wres13 +0r.3Lv[׎+QNqA\YweNi<D-/TKϮD%q$j+vJxgBޭW2xq)q g]aAQqh1E\l@+x\A/Eeʊ *wyv< 0ŠSOǕԍ UlHD3  W"'-TW?w׊ g&e媡v́^ .EfI5rzЈW -l{e [qwLa@"/~[2jSG6}aOteO>_Hn6.TNsB2']+ڦ>/Ppv rQp%j];D%yq%k•i\dFyJVqA\Q[y*D@cGq.)6+)ّDpWWq%*9+6+mj]'W"73wɮWұj 4w%8sWAnvJ:v\JR\mW1ƜGUɜŽr9+QV]ʨ"toZ.x ~ܟ{-+cIy ƉIE-㧨\ۡ?%~DƁp%+ͧ>RzZ^섨 npIɜW{j2ij~}@pL]R[+ڦWMW"(ެW2$qͧ}`glW"(֎+QWς+L~ yJZ? 
Dڗ\TYp9>Bp>+ 6+QW+Qj 2T BJ< 6v\JK *@sW" 2(jyٕ^qA\%i#'JWP׿ATZV\mW>U?L^u\eukeQ(vفniYMzˈp\>ejÉh]2IB pEmzȇ< 8|yk\Qp%jOyhWĕs|f \A0QW"7(ZvWjW+Kvٕ -⊉\ 0ͣJ~}OǕ-SvbPW"73w%j[;OŕNL< 6կ J]&1w=|/V};$>o~.gGf 4~'?49S"D.scyu4ZnUW_ǰ5Gȍn\Am:ձ-ɤmbfp9.,d'(N:7Ӏ1T)b+t@Hm m:f^.e5 vӏ&+oh7 U?: yV{q As3mϨp y騝`NZyQ|բ,Ok@@e󫓕XзJYq:zBA:1'ç)K X_bTӪ[e('jB9I2il*|M5#hY )|t 3+ni>ՈrײRzD蟔32~SH8GH >,$)1].+}/`|zi:=;O)2L(a88E$Iv7eaÊìȽ8K~xpח|]4dyY\u8+'YPs Fi4HT̈h= D"騤JP6I#)C8M(rneZGEd5w9lVJHH!d/#}fqx*:;`lqAt nQrMkL&;7!L9j7۫4vgecgC #=`T9B}0ܪ c FрG!eLDgpݛ8ZJk\6+/WtHZ< _/c:q2C8/G >Nƀx5[z3(TgYoߘtG0 烣Y#9K2 BX9)p2ȨXυ#CC u/P7!dOx' tHȪ} o܍KiN(TJcq\SJ3D"!RFy6}8mnڵEk;hϳb20!NCz޴yQr^'1?Q,GUۙN(C2tq/_ޏ*v9ko0}<ҏC]ڧ_18ӆ /]ؔF=RG̪ òȺi2Sbם. o 7~%= YpY㈠j֔`]_^a_\w aqoSi?@_b<@nvMuUÅo[6O'[{)5?-7HԒNKlyuKyKfm'NO:uz#Rw3޶Ą[FMTU\Pђ̷E[w]hP$DkCnڞ[Μqr6yi2 n %**5mU$ƈ:`c=3\0|@g):! ᬇ߁y|2=AƂt1`cD-5 csFa)Sqpz8.eU/f~{}{Ӎntk;.fcY@i{q#!EfI^g+M^Wg.Ix.<<[Z?,o2ƨ,jl"ƆO[:|]Ux)y;7>|rC;5}nnϟ^+k)db05%ayTnDh:rݞ5b@`4`v$]v0Da@NaڅU} Ys:c28iM4vCVfZDISIq>csd:30Cr.%WQ# fL Z%UJaip^.lteB^WّrE؄ ACɨU1kQRk |h;ѫ콉sGtT<|e$5*|}:d:aX&/Chn츻MRyGHc}oX*$*RCb1(>Kᑎ`#!9<}H8WliiLԥ^*m $q(W^)ǝ6*ѳ&} ᬎ倌EāSr3IKLhJ.؃2'XXCaYHb00f4 .nBF+@, iS*X<mg1jiћ<(b>>bUzD] uD 33-q̙z^7o}ww36nxݩH5:z7pַUi:-:d; gBZ̞ im xlC3KFD¿~[[f-Q{bNlt;E8`}$ƐXD". 0qؗeq`W .nby\gA;z`RFtifRrδ VZFO_*CpJUXX`o޾۝qۚȞ]i,_o w Jn6Pفo~LFaߴ,}HH€\Q5ǝ)^?_h6S/h3EHg.Q ž\,@?I-(q%_߼Q~Zi˝S&,9?Y7`d+\ix`XAI1#G02 Y`"Qp`u2` !l#(c6pb (#*uBn=hϰl.-`ҹWfIB$ eH{97Fyj|2Ƙ!4$68iZ0ÃMYs:F3S;g_.k˫=x~1we1;OܝrA$l5M3'ꦕsBB}#GmjsiIxk:ϋyI0^ saL87}Ko; $jϋyΝ%ChwM?{Frl /p=V?_^{sm$k,0~Z\K@R޵[=CRcHjJCkV kfNѭ#{aX0V'{TG 6Hlξ,jp$Ȭuqmݫ$_A4ąԽ>pe _I3*Yy 4p eZ$2긁MLѳy~PgLDQ.zVᱻv~sm⣖MEXD;ND$4B@B^&k'pյ쯥<ӂ1$j+|K|% 147_Å*6\U#Fi֕q$toVHTv9)aСڡZ*)V[9H< Ƥ%N' LɁaE={k+hfh蔉+dy.ۮoӶ( ]oCe/g9[;/I,?;++wTkTpq%r 1Grc;b¿YzPvh(ͱFD5&H]Ҟ-U1%\'gɳ=dzԶտ9eeN:B΂ȩ|Hǩ4J4Ȝ* O^p)VֻH:*F#^i&ruPHY}QT<jrEnpR9V#I1q4(Bv\;P WN.o-#bj+F0qr 11=c>$Ң(r`wAo!Z橏PIʵ*N)}aE-&fEsj4N*7ٺoŗ '; . Bqphf/uxK`:Bngu99#KΏmRvRcC^7{Լ fJ%n 41. 
e69Ǵ%D]e)*#9JjcSZch}X[J؄ UqbXXlf슅0 K.33ni3_~kroNt#6ͥ"1Ά$,Et ehOFE攔VZQETD1'ǐ=eRy NBPж#$&#JFblFl7+]lvڪ0j{n+ S !pk6kgQS\S%Hcr+!H4|.)7N h!#3DUQ5!/YdK$1 RѨʖ.&f QsN) b(ֲpڔfJTX.qd?g(W(ƟGՊ #Ay~0y=Ӑ]ή ;k4kpɢ7ڂ;SA*]_kNɖl%dmyrkEQuՅfK~Ҩn!)[lw^>P]>j^y|醋.l?xfg._vu嫓7_|xu+~>D-OPmVa^s~ ahI+whrj.nq[ڒ|ut?>Qp=-7=mY5%&_wmuQLO 8 Q;E+d@PYb Ww;r+U;TJwfytɞREJ$Ũp{0Vsl-5`A'O8ʬee&P^v4R(F&9<҂F0Vu)q6oUqKmv^q~ntpX9l+6N[IzfnMSᏏ4U LSu|T܈|Uoв8X)0AI SsH]} !eAhADv?Du2չ9"DVI^Ҁ(Ge;m#CU2,^sk^|uWq j/ׯ՝99,I4YQ zb}p֡eh^qIwi&u6Q[zlW$)_u]qU]ޖ+bbTʁ>x-uƾ}=AѡҧTL㿳vN#pd$N(PqcSsY4B4ʣʆ){L@G<[Nzcy80JRQk")Xh٢ٜ^φ vF4K2Dz%LǗ8]C9ʇVZ7oQ"bT2/AZ7M/4tIS%H˩!Ƥe*FHDfRǤ S)J#Wf(~49bjA qõ- Dǜf*;#tVڄ aI3buKʝQt h&d.'3,OĽ,H< )zHJje%uH&gZCמCpi:V qr8Op91dbapwIo ~OipT|y)/'oeG1ď~ɗ{ ػƯOV x/qqUx=KnRE Uj]Qm$"*NYvw^|2н.}g{ں`sb$8(M>q"(c҃ǩ΃dh*.ČY˵pAb"4q0;c$:"%-uڕTJ-$nR%x koM"+#A5m޶"\XK5wk9ޘ{cR")?;^V9(Ru;W9XEtDD(|hJ$E"#*X 1˭_D餁Bz%p[,Q́sa=76pM*$ <#:j4ĜS)O#uD.Q7t[oiWS[TYñO/DMX݋`O}h8 ?vXʃcgq59?6J ;KIe~~ls3)A \m7R 7\j8 B$WY\WYZ]+R+0ҬD t(BUC,-t]e)eϮ"\ c6W(p0pUP =U \گ< B5gWY\WYZɻWYJ{zpsL!Z \LKvk*K =zp% 5Wr6<vea\TQ1(( ^,罞>No?ݺZ$2Rw*Yg5#]if2 /eFJaQF`w,2hCh46(=3+giUYJ-Y?喯^>'47~t6cշkm$V*C8box3R4;1s)qDR{D/<&}{nuw3OngJ-tet르 IUT誡5f骡@W/DW 2jpvh-;]5zt&o'DWm!u7jp ]v骡<4/HzBtLlpd誡u{?wPzte&DWRL~b;}{@W_ּYlN3JUC{?t5#'DW N3>F[5bUCIHWSRW U;TC@W/l{}?gW'-]I?v%{^=f0ճz6c,Y"J]-.Qkӎ zy>/7cW_-s&Dwo{V$8<"#2[&i-l_Eo;r>Ԝ?ʳm}1>)Ʈ:"wI6: xEӶ\/epr]p/%:_}֯7F r1YSUvmspDq"rp?ޗ=-WAnzcP?tyIa'W-⎕O ooGZ`)g[ShfI"?g7 " 1%7fَ8|{?1&cMzF1$*7X2d bIxSvwxèBΛ4Cy{4Z(nWՒ_NC-YD-Njiz\}lŪl#_zgJZC޴nԻdוh&Vx %>?C[.?ro[ڙl;rϮ3יpiWK#Q؋UOkP*> +eH*=!j ]5LS}+\_@W/n} QW SS+mXR]@"rD ]5,BW S/xJ+ā^ ]Txty2t ]5;]5tZb7+̍EW Sl(a%[&3fWOF]5fsW/~mڧY! /.@]g݇w-ͭa,Vk٠& s舴ԹވXA$hw.j+ޙOL@;Rrϑw7^yA sHuNրkX&Z3At!a\sU̅cP3ҜJd~KiRڭv;|%V}JI]o\;&u{?Pzw܆wG)?VݳϩlW祫KWܕ}ޛ}:V_G2rEܳhfBSټ5aQUWXOOe77 =(#/:E+{ ~aw̦:+μoWS t#ۮ3z>%>6pY\>>O:`]PG_"W] Z+j*J=Y-bQ[i sμwyw!)GaDG߅io?[,_nR쩸EhiKF#0C wƪ5wt~#{C$G0 IkV( JYq`TYdҮ?24JcF[hHmʙTK,L5A*( 0@0DKɶ-JA*'\tҗ%s3$*Rasfj ^o Zi5mmgTCT1)G-W#LYUSYLaH E=r! 
b!$3X YM1Zs#I(C3# <LV/zRefQ !c[І-&4)hR!TP2 Wa2!?s18Yetȵ:Wc& _ Wj l+E" AX]Fv$Ӱ\]C*pB*d%}DGX#dOv |œCoΛd˪$YXɪR(9QPJRU{' e J<CVG%ºD7)GYGW(V RChBF7c6&J $Z/ S6梌gl 4$ԣŤSҎDd I*l43rd(6AղRȪPB@"тW%GYQ=T"0eĻP4  YO`1G'kVPQ@m>`Z y9xBY]3xT% 9UQ(ϓS#VwFR @t9%B_V!Hb'#҇*a~8s`$] %XTצXktdS(cѣP.%fȋQ4 QC/b](#T &)ڽ,C)9j06y-AhGM;fCd]Bxr$jw,5U3`XK"h%H=^p thI-B\9v`QmBgTɁHMF|d@)CeCƢ#"kTFWEܨ2̹(cD8"gD(!hv")j2?@͐jY4Mk$$Y MxjڀJ)gҴKoY^"XAPFmw $aK` }A׌ieL C&lZihlvnz}+¼/^֙6 FBjV44z:`1i0 vPAhPfu5$ծ祪)9%Fs@o1&Po95(ks8ݐh!/QC|̨ty@ᑡ%ܠf=jRʥP ;6dB>~N1b}J\S|<仿)Q"[ gu3X=zpH4`z%(=&R Lrvg ˢU"G6 ¼`QTD1S~rb`%Ց0) zyf%` /le1,ɅOBUD >iL6mKXLYђdlNeb`s80r=-)ju&Jb`CY*k&IS'6$@8j\OwnY)ƠLXVB ›RѦ)9*OT va ڶ:b$pA- `=Еu!Rz)AqtA#Э>>xq,mdPk< (,i(>&-)r[S.xnVKcjϺ|ࠫ|u"" 0&',k_&H<.pxۓT\ F 4xeuCND4hoQj`0=K*ЯyIsRK*V%˕Wހy&Լ8}(" ,9R DI<<f:n_y:|xD1)0l)%CNS6嗪`қ|:^4Hy2ѻ\O]1?Ă-V8N^P2.OǫJ^ԙFN:%'թeӁ6&+A&:qS dPC媬xK 'c# ܀!`H v I I`@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H}$B҉ dpH Ι@ {O+" H M I $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $]& j =ĵ`H  R*$3$8T $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! 
$г%8U` 3*9; TIHqDL $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $ t=^[ٻjZJ]קYB PR7y0&@%p%^l.J=RjEӗ \KYivV8{K7n k t{F]{9bx5e9_q= guTL,2Z;2eLk.4?d62^m+hzA@RlojW4 WwWPMj4TɲĮҪ]œ`8 A8 `kUԧ:/Ch68G_p0cWjD8U;n 3a[&A]};w&L҂:]e, `{]Yy&&l2gZ0.iGFH6hZ]|_;O%<)--񘧗e\"_z3uq# +-aԫoi%<;*UMu1|RhV%mST822]FkiK^ R't.3^%vF.*~t tGF}W_!S{w_`,?}>hBE{]~i/"I/T0@͏ M^UVܝ^Q-/~7{MjCqd8C&a Ql u(p˒g)yic!Қ8w u Wc0p`Qeq>&؂d)Aw%.$-kP{d]2BI c Q ë4iZ1·_oXvL9{sVݻ;sW5X4pJZ0P%a4w_RAt: Le2Ria$׊&UF:iN7p^IK/myQ;_ljxqOcWm#gf6x.x\0>j:hkQwruΟ[C3_/}>x7ӟ9"`V5P@Z/).=3jӷ0Cbϝ `FqwR}I1ZRBI 6s^֢*ݸBo_JFw1>cxbɮ1u&f;=Ɲyg:;twcGZ|vrkzݺ(qZڃrPzK mmm6he\orZ ?_6YN.a8zJS zyt&p_?Rg;oC%Z零}~00 6xKokwwM٢cD&RGj?c+e)8U"9^y\(a~JVF4bl?br=(k%rS,3 QdeDЌC2MƘ59Ud 2Dfj}8[ v&nЉ7kPNx5f.T#P<p_bgϲ-g;S f Ši-eePdu߁hj: ޙ-IJS/v7w[nY-FeВtsfŮ7ȿ13/}ph1[E.|oтG 4&TVtVXq+j!Уy\{ucE_'Jg0Px5.-F>~zp#%|<] ߖȸ껅ɾ9KyG[.|5m@sv6q{.:ҙ)c3(8K.ʹ-xPLlo$ޝ=~9|gCk&]'raZ=T~8k U4T¨TYmdT`^ VJYm0e%=rN>T2J#6Xt$ `g#KJЦ:}F?졲YW0 8LMLP&r$6:*Xz1LU0O!d-cqpw ;Jgig7x[K4gz7g> DC9EhTFIم5v' 6f9e[`hnvZ ޓ^#tB݁tTq)Az£&qWΕ19|B8Y0%dKcיͨ*rGXd~ sE|S pv׏M??Ҥ3PK)QM{~ M;)q濣롆:aZSBl$A@iOّSHgerŽ/YMptPǩoٵF/Zh]*gqIfqVv!n uLrpvZzQԻQ-j硋)v\g>sjE2%fϫ5Iwrq]ₕ#/I|v ^9Yami]QsBtiJ5w24Z~kT_x9>xj]F \AΛvY+j_: Ooor.m^bLaK-|8WxEorSVFn~뼐޺HT9XO19pʣc_S"]zL;&~MT*7 ˝лƍ>Һ5q'?>nx~R{&e9).,P%,~9g){U$x &bx@?k*G} :Y\S%V&EP~&,iqBtIi49:dz/b'm[{wu؀kYijԝ'IݑT?] a<9Psfʀb0K[ꬍV\@lf2+Gh3A(z:l)!л$ʉr,rhYQc:g*E]B<z;l̻㾁Ml@J]}q>6puzv}_6ϙUj^3t`02`&#Tsf"Gt2i[Lw).F[E96Ff%c0 ڛ$Y8yABj5\h:d/"ݽfB\/v] nmhS #!Vn:9oYنkz[ 1<5%f$ṖQ{mcQXSI%֑ I̧ v| wV%KiI8.V1f&0Z3&laky&#x p8{v|=b.$l}7y_y""8`/jgw) 0<㙝!<֎k?Mp 6+=>-ĿWV+jCp4;gs2ƿ]ѽۙ l/_&B9SQr1ډDx'#.&=\J\fڔ ($Z)ϑ -wVz=k 1u#Wz6` *5>W!yQ$\hxT*t)CnḊ "K\h5 -+XU0 %-?X! 
D?{HnaGzG5@$8b{Kv Eʒ"ɞu_2e;`f3R]V1( 0+;j5s;j.aT3(:MSRr]͓r]Wlr]_>MڒOM4'gmpRH%MS{L߿iɪ|@TDN8*,Fe>u̻bȾ.d+Z:7=)h@ $\KϹ j#c5s#c=R ͌SPWBaYpeQ]a;M̿lwWMqw͟`yП~͋Gb.xJ$r"c9`O%GE^{ЎAP$rjI+3m(G$66QCЀF84b;f]8#hleĮfa!fǩm*Pcf R)T yHȂ'SED[e*!3$W㉌15X" 5Tg'T"),9mPlTR8meD"voD @FNFx1"hOBbGfT01g YMȐ1mb@rQ@3Ct\"iFz!JʈX͜#$zԁpq,:͒SqUEb{%NGXOy2)urZF &e79pR8񞍇gUVYmȣ4/5{~tD\FZӬQXY xJ F(0 V(,I+oԾI_x+$A.M;^킓 ;c T%]O$0`.D .,;YdL ::GeLZ(:0 Z a>൫jsKx4LJ9E*Hs-(0"HJr(r`ڌe ɖU}+ou%$OQ_`EX${AIT)KƒAh'rVlkoaOʼnb:&敱 q "@.LQ kBY!7>8UA95<ٯMcRq 7KDY8}bjNro2aZYcc4oz17XH-lk}3|@ˌbg^>Q6\l xr㼵 S ES ږ(%/۸iupq#ƈxVϴDK( @(FX+,4ѰLMQ*עD)~5JN`] )2M)BT"U(^Ȝ~un]R\H|'dYPe:C* y"Cц,jץ20%zBѝ›)p5}j'9 ھvmKwfHT{YHS(BpL1f;AJ ]8z0N)aBN!iZ*X,d m6%{J~dMWOz?KJ ֧w--<}Zt&N\J& \)g*EWws&8ˈOJ5S6 dl;4 ۄ{Džmv6o7_ۋU2:m-3rT[>Du vN+:rzZF5m:"m3)/a˨Xra\"79tu@|f; Б˪w{Ã/yg=I:|̜?Jl=x^䱡ә{]|\^c|-xkn=݈Ͽ`^M 5}nvڄTN[|y>\&ZMkY 7hl$lJU>A)zJݖۙo+.T\j/bXlu,K:"*h%:引{ r.Ru%MJ(r[đ <\E&3]YkKe-iR˜#|IZO͜ݎoz6U,d΂w+פ4U里f(Nb:F|U l^?ESc 67n|]*F_`iѨiARK=NdVjDdeY(F͑QI.+l c:c$G1!d#(jƩ* i#*CʒΊgĦAN "N"D!U&9Tu|lƓѯDh=bKeeڅ~m.V[n2@M^O`SXe6dY+#ty,WR>yJsEu)vn Z`A̐`[8NHȑ i(E}\͜Nb]KJ9ݙwqaf~ǫ}TkCTa1KtŮ SmNg?.ӭGnGAF|_zlMUYkyn4kYiէEkf,+giIh]7Y6<$w>Sæ Gs"d ɷHmšcinonI߭b9}nQ߷޽| /O|Po{sbn2.޷7_&\QyL"У8OC\J=doGSҊ@AJͯlr:J_ZMƃ"*ChH!8M*9^um/HݝjOW:}nzwEc\JCl6v/ZzY׸i;di,>'[/Vخ!O& ۤB?cj$|6M79dף?2}.o~2϶ݏ=+ .kX쐻~d;yP*\MhrT-r{.| S*N)vEp@7`|jHxHɪ1q)&3mP`dGΔhWcJS~)x4>z ^@,Kst>l "Nj*R|zM ;8 etS'2m.kד|ӽxb֏q䣣oD1-}Z r3^okTu$\ M4A6 RZT.^E Mλw.A}r-hI*dDE l)悒Vr٤5@i`s$%R,c悁WI6u4d4 \-,Pbw pȐxZHhpLYAЙ`*YKoss{,A :dԳHQȮl=J|^BH7`0ZDAC.yP&hE L.s4ls[!)=љiLdĝRNJ[B&`:;o2faLD՞i8kGWaMndz}&% cA to ߔH?~3/C[OlϽ<p7=8hU'ذ 13ץF?rݟ܅qt8ᯞ$\h8Ԏ_(M׮!:O"ܔe||V?/l/MOo}I}ٛ> ciAwmf_6)hx9N~U9`@1چ wE$/s.97`h'V'^Ɇm1o;kcy{|2vt;LwËۆ̖ {/a׸Ҭ],ZM[|k %3pR?^Q!s J#(Po\"k~EZW[l? 
IR|Zm$FBVvg/s{^ZNV ȩ"x8 #`$HaI`}."[_'FxRșt9ƓնGlҢYɲƎtTK#Zc8]X8t>M`J:GMYSwc-.F7ENt^y@$'L>,`1x9sH`D3WTm^>NR^sj?zd@29weCP/" `\;Ыze|LnB۩܍jscc#^/EU95+‹Ǫw\}G.C~:dZ5|b+;-j^NV_iub_ة'#"%gJ{:_!e Wz3ۓH0̋!R-qL IC\DJDWi e^vRtr0pUUPHkUҙFRβz5j VO{g) >έḦ́$jbbrt< 3LѠKZ곏ɘUVA,gG;uL'3WqΘg̺|ڛ͍~iƀE%ڝv)H3z߭f1e-S5BuA*/TF 9 "sUPUY(bqD,+nYnbHv?Mr2\Vyʠ*)ewt|U9y9E`v|tߴak8eV|<\v3Y/epx9M~Ґ'7mawPr;%Ċ +:q(L\;+R 2/ 3sWx|p0q=/\=LZm&WpZv깲B"WE\WEZpER.UMi꫁+vL \q>**pUԪ+\-?*>Ճ%pUF6{z3u\,KzX |T  )w'rOo;nRvϫ} vA`44>)7˷m MTr'0x嵕a('+?~#4P qَu5>,'}ݹn*_у78oV]bڼ h/,!lb'gC=7=2 eN"'$->I:MMHryW %wK fARg0PSfsbUO=&;Ro F|e`*dɩ 0lx:^DY4Hz qpK5u>c3zšw:˿M\ Mn's[}𡿆 d۵  W.Z`y|a]ٸٶ𙈡 c9O|WWXʌEVVAT. 4`׆E]Gke5{>hҏs=㒲,[-%e:H}dg%BdJUCJZ.;Z~#{e6X^R2Cpd.s>'O kL o= c*^zxuN& H['ENlD"I 'Ġ,Cި"0ƹ& ."/?%"HI Jt*c +jcܯxV!8Q^Ӿ\5Nu*%*}_xM:~dO8 z GlnJ:OYɀ"6 5)NiI-3F[Z=RؔD AC2%1tdLEcFĹà(澠vcc[6 iQ`Z+'LJW |$杊#-W'3!"(U!eHƑ15 Y G(Z"j47s?֤~ bcc[D #mEĻV<-t@ȉScLDU# T01g Y6uG̐1m1*em8( rB5Y.hc$&-Ho8RaDlL Α:.rYgcd[\t kq;,NGĈ-n!cri;Z@ %JhdQhY>Mjq1pq_ձ% ,N`<|߯~hyiA\m\풰q Z~TLمFUj5?Rw1rƪ 5# hk1RuY#@fZmd Y+$ILP2ƃJ*f,X& DQkh4jm!r!Y*7J6W<(M" Jyo2bP2{F&4iHJGh+:}PըضKC? L/K01JƊ"U:PYù!@OZi=T'IXWĬۄ?uTV<,T!2klH;IQGGTi,6'ܾ G@{mR,@d;GɖD Ġv RH*6B* E O !}D"5'NoE"RfБD&EW-w3s' %W0ICNAs/_$ ǝ:f"!I<=)b˗2. }QF'>j=\2_x`x$^sQI(2Lp%Ǘ̔p.hx4&8ƞf[Mf K2Wkvsq'n<-gea~j8-\~u'c+.Rr5_|p{ nI1f׶juKQ;Xޒ"&j,XvAi6У6]ӰUnuufJ^CΨLGJÊ?úՈ+wI ?p. jZM^Owb~W|S|߼=žӷ?~OhKu"hM/7mXSMS{MMz:vWv|oHw_>~OI~c}Q.[wMbbtN Mj~1ɽuT)eZxjdClg+}Snrcɑ$QkINcYaPă0 m*1"P=~Zj,'(Ξ2<fWy<]Q,ʴ7dを3FTy ud{ȽWؚط[dvm}˳¬j]wukøAgaa壩Ve 6z媳^5ӓJ|r0WeomJ{{a|ftIb=j0% H1ye4sd .AB霹,ǸVr$+^q_@~ز 6IK8+Om}y\O$:r7.[9T#6R*h),fg5`N4?@'@Ycl,ܘ2FXu@ gVs>P҃BRhb odx;$:%5|-`DXFb哄{I*J){6{ƽpEրE)>,zVx?P'<`I ~ZϫMPQo|:1 ޾jl|hZ!Sֺ9K@䡌ZHoeb'H! 
oW|clW|0cU,n}S|0 SU`'.GՐ?v!CKJ<N6ta\ U4x/`%HWhf1z3)tG Bs)cToNluK'yq@dIUm2ڔdPWŧ|8$7lM֞UG>rhT7(G&-kWj@=q16/t e]z{НjG2;Hzv{o d %gDW{yQxwJdJniks}wϘ+Պz6器لYl{9rr魷ۜϊ{ pt' =myY֜8S^_񪍝EJ|PVRHK&/ dJRITv$:q88UuBq$\n\N0('ys+^sm` yY:!ceqc@2ʽr8aFhNPƶt%M.^끩 f.,ϝ3KU֊) }0v/Lm0ugALW *6^3?Y n8 $l;=p:s^"L~+zpgX??L`-.!h`8 LA5 ߼.jgٴfw7yҎ#eʙ:o{+ߝyYVόr7PJ1W^m+ ṅu7bJwC[wgMN|Me.:u79uq"r{&%Ye B#3*Xk|Я|(Q}30{*7kuZ̳ \}>~#c.p$G* Cyk( T,&dm(1,ɚD'Hkƀ>$% IAdh6 [稫aamPWI;0%zTh_mpawp[&1Y7Q#rGmu͒=j[:W4F?23ICt$ɒ'J L< C2ҊH{z?SIv=JuL$" +eKl)0N HЖRK-ψ`^h`=UPr(EX¤@mc}J!tE N&x&2iK(uNG~}]df XDq|䬇 Z@X[ͼ#QS%Lת|ٛ )[Dm4D%!0-0? aͷ8. å,P.!;P ėَMt~g|oؒ/#%T(`4`K-"Lx %W6'׮j8/}K 1K)`<,KIwx!tcӁG&ebђ o8:1"s0ԩ7Fn ` nlJa^iGj)`sHyAb1VٖRkq2.I Q[dr#ٔhrc,Hg#_oKy?nc0=:=s5.!?Hl8O,9VNƣ~ I{2@JF/L4aw Ȳ^0Xz[xQhh˨_F;+ȣ.Hr̗8u* A*ʏUMoI?[F9Jy2@ 3Q\r&=`:$ﲯLhXh W7mlaRk1iKK,-bѪ@P`]o5psnckS6z@3*$ҖSE1'Z!qcgCuqG 4Sֳ߳r\ݏE.OϚ<γUkN݇; K4qQ,ÑGĜ%DkE!B8dTZRa}.v(!fw3 O```i3# Ty/ Ĉr-0"TR0AhLeXo; 3盏Oz[w{^זm؝sD)Cbs|dҞ&G"k,4J+=&`$<(j$B?!bgxZ2]?[|FZz[VJ)+/6tAU_{CU"y6r26)*Yێ@vj[}\<"A7I:jRLGMk3ڜ=Y PW>D3O6 z"ټ᦮#9Uj5`qafxk`FҎ^Ջu.wI"[_aST],kkno\wnekȖjdQ\Ko 1xtG"pDgsQ:1KH S@JM6/A9NvobA3j'e?fԇ.fffȦ Ż4>^tti6\`v3/4”&`mT}66k EVCЖUJ4{AgA4N ߥVLMDw{fcP^Gr4~?#oۅ١Eaq=<3BW2\t9; `@! 
kQzT^+F/@ʓ"omlDJpy}Qi OV=cʋ; ~%BG~DkIc̏A]glb{g:+Z*,m$d]7LG۴8#?#sckT*ֵ%sUR\}JqM"]tQx;m4cӐK}oSGH —k}3&c0oML4SUn8Le^է^jHf f(|?:hKzAoJƨ.-K9&?HIE8zFgXg3?{WHr08 mWXod]l^Xnl'٘Z_%Kv{&U*t}UJ% Ooivvvઊ[*}*S\^ nāpHpH+O>R\J*pH)\pϫNO+ fઊ UVCJiW \Amp(\RR#5wRlz+oER WU`vA!wRpM*nxQK vZl^Ւ-yp6MW}~ޥ}jCuý{jen׳m\U(S0!FO^.^׻]kH-z}OٟCRّmPGb:lwu_]*_t菻(̟nnl]xi4vmn+SH Mv5 5#Lѥdns׋M׽n|ڋ L%ƀ]3O6!k'X$$T :[ EBRBȨvZXΛ`-#i`2[NIabRځV" J܆*:0ƅ& L˒Nٱ9f#, oX%]Jےbh᭱뇨c6\~Uܺb`suɝX(K>u.!bU&pZm?7t?hbwJ}Q&'Z͑%Z8:yg9e@Fj*+MhMǻƺؿvoW:jsu$@2ec&d6萬%$*vPWQurMR|.)"X Izٓc-)Ē `H>Ai'9 Vn &ID4 !s{bHOw-"g7?jRޣix[lwnͣ)f+م.7O[<{8gX~mq>lf hZxRNGNjg1Aª6q\=|BIE5ڑPt0yn*GiQْp] 㭆VwZuy˸[0q׏Mhʷԧ~?S\4#krV]k ZqrpvnʈKRW_RƢC/s&J=vv/7eWuWw/t麯UɴzڜtF3R2HZ l%!/e-8Uyk|$cq \ ɨFU[pglqgyٴ{u/:r6|=YNfطb[o+[B9"7Zo{EK_M 'm7v~TN;JxL:6ҧҠAjqd*77w #f#;wS'LT2jΘBԝsl6 UVs6NgKNVRŜB^""rHU S[F0gϝ׃ݣTzi{v }E![+j"ob:PrHbJ9ASC%C>iO`w-o$#eYs(M9Z+\P;gQMg4kxfu/[b=9][R6l`h6OUsgB]),e# Rj AkdL]vb ̀ϊW{;z\]'4dd>h6+u5RFu^%rRØ?RΆ:âMsZ"gIba,cM& d8p?#"qRM6t]ҙ8M+lc_3]LjD!gE7#%CrJ)s*8, +2 QGJ1XTE} a,rnLX1 B8sLɐȅ\JIZTwzvȸ8oէδd_\pq) $ JwT*:I%%0U?Blp9p/xؙv쉇 aWUa5i#w2[n(Eۋb)|{fMĺ>}ڜcKmL1(H9𜞿R&M$ۣ"A C K`8tVFJrRr= ,YWN GA*m D9IX)C6ze}D/Sd"$Le]JgӓqV#YifT[M%V=Ź/jn&Nmouv?铥o0ٙJ}cM0RqZ-Dq `́'IcۊlsP;-L"uطB!:sL]#pl #̡,m^wRmWC~ژr[b1.*AQm7@"!d#Cla3L 둍h`^־aD[W'WaaS7?v]8kM nvM`(n2'[ ;<C+}uZo(m͑8GΒVE]|Lu+_YaTsp f|Np\}T4/'5|RsM=6苗bb+ΡP*$M]zúUQfY";1^L'~ϝλuI+%]@uݹʶ~hnz>5Z.e&xζDH.4ѠF* S}Z@OJ|gA7^)wc[9(eIZ `&VuCp@u8`v)d A9'e“Aɲ 2OF2T^Gi3A.vҕ8#'b|iJ /[<70iYz]_ knE[RtT/Ϋ̛pυf Lq!Q:`TA&(3r!SB"R!yȼXHDX5ϜE>AO@I&YF(bLngo!)(uP JSD@$w`@!Qglu[zƛ9k}AXs~on/,V+EARaɵlAS(Λ] a4P5&*OM8kC3CI}/zhOY0g!C(]&,by-8؇B a8pAϒ'_7hPA)d% G*⊅8/X]vk2bw%4xц:lN6@.,T=;_Vtog,d bL +ȶЧS}`d^̗FRGn[^duh DS^qDVz|s/?*l~aƎ}VU60̩9d+[z(;ރtR}.mYŇh֬@ D $]$ƠFQu$aP5`0|AN6OA:A{:3INQv鶷eMr.vՇR7'^UQ`+UPQmp}^rJ0(kîK1t Kp_oLRkwzRFg/nhCSl{x<ֵm.lݼutE[yMe|HE1WW]1MF9gW5Uthyk@fS~&]mtI>^Qvv$K-m (Y16|ėEJ|PV\Hs&ϓUTJ8RЛdrS-7eQ-Zar34X)h% \E-,dcfqoytQh<)?*U1*[d*h%{/q8aFhNPƦH6g?ub7<NV}'JH?>~^LmBLet[bzyH攈e (i/``mY5  ԋ^= KZ.ٺ-dI;YaD0RXD9TAhaFs *QRGR Ї"MŪ7W3ؿȄձ 
^~K9nghfV'^';ZjB(x}bsI3^qhdpUn瓙ۛm;a\q<05Wg.>âؒ*Je#[ Pc(qQ<:M2ȴp8I DDaL%(,\vfH])UhV*UIU38 Ls@3AJ 2i7ERd|Sbm5·{Pee r *8xHh4'h.˔J3K&J_):sVڌ!gYfܨXAJz*i4V]ZwCJCl(A!Q2G@wKL;h3 su1U+X##Y/31oYQ` #o8N l<gg#VT xSvukFN%3]wt91r)gV_b tfC?GIWi#¾06dl^};uc-..Χ r5G}mN۝܏{W*uǺc]φl–ox#g]B1yp\OA:0̠$T_1FάUR3 ČD{wAo *dZ&\#Ї阤bxl4 <($ R("Aa]!Ϩr 2S|7]ąrO?溴v5ho>]=9FE@)+b u(iөx7 qD${grZcÊg sz`X}]·fϚ=ԛ ˏ͗55j_E!Wc3|_i/gtoƳyt.z w3-_3b;Inyg *f;/6>&7ci gw- @;J6/Lߎ?˻s?]%LwDnٴw6m6vaAt/+:pF+U;+0O*jFNHwS.iWJes㱣1S=0ix?}t(˧q D5-FRmqyʵ&ݬ@ベL“fl0umNXK=:%sx f4 V꜕ܓ}co9͜#[eYq|l{~69T&;-S[e/+!^9QZ@7%>GaL 4+cʋ+.%Uꆓ6[L}gs2BP!AYPZy%((L0j"1hF SQjgU$(3 Sp`)OP|imqmZBp|[l80_̖=R1J 3\b>;eQbrx1tM ;oyӻ7@v)1A4˙Gė3$JAَs D!% A.3 a$8i(nm *eE9%.JsvQjr&wQ ǡ]Q16ϽbXIBǻ(q(E oS vQ k"S -$0eEXHDrDI'%д8j$iCP 4wK^1piIGBswŇ1x]̯d0j= Ɲ)=f{|xGENxyOmM]eh "7:n'{.~U\@m0vI+Lp!7mLgq~5v&٤|~qɎ8/.F汳In&i2z^co+W F*[TKŕ,hg'߿,gT  >|1Q] [, YNmjsYL2YVyYݏe9qWAkY֮D(8N~nW\R7X:nb Ң郡0?ِ5A+ӡ@VXxaղmc| jsNo;'$aD؉LB8ofR f7Wʷ֥˺[Wc3zdw~5{g8سLNj UuH_}Xk 37oz}li5_u>ջ~f[gS#]} 6 _{C&\N }X_<pGf]nlWOrӪ'~븶_ǵ:oߐ-nJ{p kJb3,x< E杕Of\+㚇__uUv~Ŷ ClEkɻu?~HkNuI9G'nQwJ9±x plZǜ^ "mLzcv7ը&\ɄX>[%vTA; (:xKEvJ Όs*qpRFJx:K .U*H6}$$ñԎb9BVU!d2 &ĜFX N5Fmb3 jYûbAWDYL0RKb-(+A35F, "qh(H 'XKJ?K! 
!i{r[#O^zض+uPK綽XV*pcP\搔1iA4%s QUIMPRG@@ Pl\S@8+_K bn3hMoF2'Ut!{\7CS`N4Budi*dOk n/{>'GTAپUe䚡 ,ο&OWB>njU#o<>ǻZK0#&hHFzi-Y X"a `UV9 h8!J3Lut:L<hՐ+\qւ`&BfC8#T  ]n g3I%ʴ'kGYgHUJ LN"kx-`xI0"CYF`ƞyDr8x]ܛ2b ?ow+|o6P~upyE_.LVC5+pIB;'J(b9nrxxv% w?V|ah ,]}[:}ngo2e$yاNcΖAY2bO1op2\I}2|Y7/%03n}jۥZ].ݻ?ԠqtҮ[R,O1d`ʱ^r>չr~PUY,{Ca<0sKdUd( OEwex|Bn}>yA.uWY?,wUr5PL?s"> O:($\}eȖRS,?@zAf{JO-P!-Kwl )oKY]A{ܧeߏ:^Aa6C}FQu2[ޯbųm:ܙQ>0赃:|tbqG}5gVVHy/"zPІqhgci*UH.Ɛ-f$m"||NQΎoZ鵃VIG<| k9ąt҇YHBJvh 0d-!lhƒ /Ր:׎/T듢cE8irvE9^d0w>v߂mtŽ2 q;UfpJ0b]&hR8ps8%`eCy+%)@%g$vtpު`?)%(قJ lEEI!P͗C'D7_A:j`x&Fi0^lqy z E\ƙT<9c8D#IA(϶Tv>sOkthrTz@,l\;x}DeF iT>4`@.JeeL+{xwr+v|߽kR|y*(!i^1^2PS2a}$[;YKɿZ_]fvgAL9UީZWm ŤNj mAFSG^@PmDFn C2͢I׌fIBŬYR/O+rMW+ /K Bcvz4b.'NJ4Ⅱ!5Q@N.O}iˈ җe0",L~&L@|=嬀'BS5mjLͥ^plZ؁\OInr2d$}a쨗6%1!KI> -=&;բ:^ :4B:V&h$9I S@NDx!b10%mm;LFI~?\f6.U?z)’-cdU DnL<fs"kY1&^ lNJTnr" ØXhQT Wb p3\!Z@ fsm ugDy.1qɸu&:0q25 vTxd6)iĕV:(AVA'8A[;0d@ 2je8Xf QOAL !2ePZ0VP"/'w1-|&A78U! ʃ#,cd!ڂ#J K7c ( 1`2MQ ( 5M.T!f5@?:a X>{Wܶ /܇T+j.XP$'= %A B.K$ALO_w 0`; hC3FȄ)9e8}lxqqlKoo{8ei_r|wn&$b?v%$i5_2I׿VX/[/2Ojԓ\n;`n9dJ AWwG^*Āb5 GJp>_lqe/sC)e/-l؟ 2 )MȕrB Y怨h*Jl8qH{њqܛLp)Վ?c νa?XsO.ͽT)k@S+so AlC+.M)L8>1^;A3X+L=l6w^NlجpѤ4[ B-.fq1o4N6yqY\KucŢT>6\,0=Z\,:Z\,HqqúCbqb N <^\,#ō6xB}҂H`csm XkQ\; 0"@BMp'?Z\ zX6K7%9z$hqhqdhqdhqhqhq"uV[׋{̻ςj}-Տ>NgWyܕ;b)xD0jbpVbV=ҾÙNlBƎ ) &VAI!K.5B#ȠL+ )09RM]$֘SF ZS&$&Kv=_lL5FXuP`!c9pG!)4>inEVMYQb80S Tz\e6y8+P LN71WԚ{̯Ӹ_y<<¯IgAG]pǶ C{-7*Wfw ʼnP{Ny茪"ϕ"jx`sfILr [? 
ȚR Wz&[JĊ&G+.ſ^4DˤS-1-4ۊTHݨN.ML5ArwG\Tܝjp9HFur9pw).WF:;ߎz\۝jMxYcjlg)@TRJ_5%y,cM6[00ta+S_&p}β'c]0a൛,G>OR,Ryp+",J\%ZT z#1GRyPEuRƻ5oFy6[r*SsM۠wKAIc\dޛwK_OWѻա!GI:y7K=[*Nx SWVnuhȑhNQuGލg-UT'um[sCޛwK__ѻա!GI:+vcw <:clݚFj7륢wCC\EStJRZGލa-ꤎwkn}ȴLUnuhȑhNok{7A!xTTQ1nͭ2roM-}jEV&:V&Oc̴r"$}oz̴ 9rM)J~ #);4JclUhn]9T/3M?}jW 9rM)JZU9[ޭkFJ[R5@wkAk~ck٭j aDnmeo۹۪[׭ë$EmgmBnVfVKJҬ[ӭ&mgTn-HVK`)Za֭ j ֝]F51w=:-Ac.cBЮku=fzuZ1h]scz̵Zu=fscV13z]NKX.Gi \1w=:- J[cfRs@w=M3#RȮkEž"cz̵ZB|Uz,s@y13 z]VKЬ=fF4z]NK`Ī1w=Z-ARѾ3Gu=\%phz̜hs@q {̜nscĮyB=f1w=Z-AԾ(z]KXQu^=fowVvcSKPuJ `ЛEz}7L/^w͋,|H,*`/wϯ?0Ǜ-^W1Ȥ?nzJ歅I۾)3x_{1 '(r^'fa:%n$,&6Xj<8bHN)I* )=e[B i+̑iC[%iN9c9Q 0ўx: 0Kn\" KgZ\ LOL0@T1Cr9~R*7{M' 9`~Ĭ "p??fi`v2@iYXZy^~z@6-Er`-7b\&9 2rǩa2c1ZB ADZyFu L<34p( Jkc [p_!%iRQ«& *"'_xn9&u;p/[_`Rk/rnH۷ɾE,*_wNFC{n2?rpjxb/}% <.@F$Se@JY!®2M" c7@@sA)(cqbJ9a;Y%X 2hOX(3Zylb `Zy֛uzsہIkN9m%LKRķ>z8VZӫYts( 1+,63p2{7 ݫۜɂ4 > 緟J̋[ŏ!-G}UdH({e/ˇbv^/G#Hz).3N|p<_WdͰ1\ |.ŤrZ)"e]|;6>'$* #$+;;>I w{ #n"ȼ+*/)7HXȰF8=ʃ T '$9KrQ)yy3DEb95(ĉj5RJ4_Tss9 L.h3CQ3ӽ9ɽu.Qy~/J9&:اd߇)bMfqM _Tw) 7c^uԩDXiɱ/&x&|6dӓ| XpObV7] ي Ων{}ۋvf+v&7n?/~ nc Oѽ&#WE(Nq-U:fQ\Żt`89 7ŧNr fG>ɞ+ 7}Qe/uߞG =LKUzNfѮn vS[npc}D+?4袃&gDPa]:ichaAH Y!v+KW"睸އre)h$94/LmcңTGzSҚz$Eʃ/\cIB /${sVRBҍ+O4u-#OQV]!sGCKp3g?[cp*U7pᤏrU BZcM&Slv*8WHR^^z+m&w͛BPlՊ;`b?~J@ٛv^h[Y\͝2C7x|qt6Z.l5ip+͚.~1Ⱦ7w-ǫ7hloG3Z• B!TxVfO_w>MI+$*J n'좫zVŇ{W1j]xd86{EqD<QÇ ޺L~,](+s>#lS(=6RU8Ldw[eqUF#yv)jj{]q? 㛸3]sFWXr56ɥRuŗ͗lJԑl'z@RII9;Lz5=zݮLH#Q}$m"Ի zŕB7 $T6N>ܯ-ozf{".g'Ӹ!KwOKֹ>ƂFHmL"0SY kŞ Uv3^U)"uq]I_]>8R͜ɴᯬ[h9|}_)b𚦓ɼ91Cśs]7[S$J]?neNrB1 g1$J rm;VaC,S^Wiؽ4ӃuGX8 q2oΑ(ee&2qr\^ID`LܑPNp,15@}M5Yi$&KIPb)bDWhE8Ts; QbͲ FD1-Ğj;2߮)$DmR%1̄N tB/QжjLAcws@EMa>Y&sA"lqnRk yb[q:^BCwɯYr7A15HP~b[FUrBp;dґyQ5c;NʑE8k@mfQC#I/hlSK=w 9k=lzXIHE͘a0Lka bI)H%;g\?{!rP80V"l2mZ)KB}3('[d ^wu_6F~FpIyG. 
@;-u `\j~{h>uPrF uZcqhzeхRS^47#Jd ;Su nbp>RdeNy-<0gt'ƳixDcyb!cL^-I `TD۪RaD(I#'OH&9#u0="<_⡨+{OĪGy֕#V!N(*DW_ ;yNb-+[4DZB4}TgLLLw9Aq^\"M[Űica`wJ{oVL^ẼJ5N 4 0a9P`RpD;[.EU?W?`|:!0ͣvdȁ3a-Z e?WEZ6g/C+/ukCdJFWR^cvŎBz>v TӎqYbh{FAW>꘤Ƨ:yOxZQ>QϪd (rmAnw(e6!t HwH p!Zt*L%sthm8ImqOoT4S!.ZreP9l+,'.)5˚^z5[*rAN&VXjtIw?̝ ũ?zr:lD` 3EӧhejRG-Ȅ:k+z Dޟ=Uʣw^kB:{I]-=NtDx>8Z R-ӽ@#-{qu  5`|/ލND#PզqB}͐ $)= ua7zIh۶{I)]nT5V$㬻)Л]StMlzZ51|*)V]};}5t1eAdjiT).8W/CԘܹ:+6ZWѺmٺ*)F(O` Z4F ”JWSɹ:M_(Wˏ(]NuZ$C[BUAP_ĈGn)CA8Kce#8xMeFovHJJ唺3sUeݘMS')khQ\d"FIʉ x 'Z0j=SQG1}G y0Jت:V"xO!XۈЄ[jIT!ۄ$Q+6.m,{skQA |6` s'KI FJSso㷙ºEמ0XvOc9~B7ſTv?Aw!!O(\;-qwLx'h36duRYt/jHP>ԚRwk܏ܭ39(6`F$?LL .b _oI)jCs>nsw ^/уGnÓL\*ŨB_!6:g4CA:849]Cj ރ>؀;PkL *` r/Vs0XnV]sA $vd24#$$!᫫A+)`&Qth7WmEsӧq4»ɜQ hkTB`%Ʈ{ :SZ@=|w89Lhd5x"@r kA)z&mŘ"`9P Z}Y7F >7A/H8h4dq9bbIwOI|:4X1O)uZp8g_ż9w,dW X廵hiBR])!BD )@hmړ $BzRFi[5<-(8"psNhla_n@9y<Q{& G OD^=&l˝:`Sn$l(BB+I7.V҈-v|@%oj5(lCY # S.S7uDJJX"|Ȕjm%8moĆ/J/tik9i^FHF0ˑ%ia/ݜx(ANjɊ}jdJԙ9:0b*<qc;*bWBoE\ ~;U.n- KdYpL5X" 8bzi=J~| ȞfuMpx~is ޕq ٗ٩'x*f*HcL$ߓU$Lo~M/'fAI<JFawT`n:q zbV:H%m1BDqgbiȟfq[7䙝OoY`kbmE%DCՊ;ldys(vg`Le<&F STa4$\FXȣd ,IoiapnpP^$_x+i}z(Z| \'Jz)1cX8~QVW/]ATe3h)4r->x(K T_iNq4zÛF$`86"I܍x|3*; &U,֢^fe #Es׉jEF N[3KCX8qx%e||p- )xǰ y"*W<82B;iؾĒDQUnЬŽhCs+sl9 xb(E#e91Flƶ]:Ke+Fm.J*׺`..;RvB`"Lb4pqߟݢQ>&6Ϫwu}Cn|lT]]w`դ7YqgcOBݽ=u25#mkl`N tMi_o[ȿWJq̾BTJx9J%Ϝgj!n "@s]ipa/$TNԈڒtN#J=עvuK63joKTLM$yE\H~LK$:WTDC'`-H}[j 4ءf ܤ9K?לQwn $֟rBCQ ?o0o&#m3+O0]cp *6Β։i5|=9 oꯃPߍ? 
lfx c'pgOk~a툱>yrwpB^kv/|GN[kJMxnsX.:Bu dN^0>9/TiQJS.hk&mk&lX,.#/N|1~zkw/Y9Q\S2P9QSr8SS#KD:%PmST֤K%ckd#Y44ؚaG 4m+ۢG5VmJ6_vEx֚G${%@43tRNFBhtZ Ƙ@8(W&jH9`ceF bR' [[@d3bK‡ ^e-@ax*)eJ0iaT(4(#ZԘYyA2gfP aU?{&i5S"j3~%<'lH(<[IvsKjQIg-=E&,Juz< *a$ȭ/PPZWQj^1e.^ {jP-,#nbU h¾2雋郮9noFw?ӏݷϴY?Nrg^$Z;R5Cen--?}GEc ߛY]4fMV5Dz}i-a6Hm+*|X6*+̮HMxRq8cJS ̖~|ؔO!D]fj|3K lCURkGᱎOV8VL_scd$R=H%'/9ߚ5c۟~zt63 kDOԵ@Z %B1g՞p<Q7>ӶA,FX"r:lB $%RL&J5J w,37w˰Ջ)J~ёn7f_R쥜ڸ]!Z},:-FV[9QBPO4kI;ҧ+;`"H h:A<=%u1 6aR{~mA0`Lڒ+Y3 }eƳ2]Ĥ.0>|(߉zzfMl^ex/Ԕ'8qp\%e΄a(g &bLfK*SzHe#c1?W8&W)XߎCQo7xTF AR&Crr2/ Ʊ}R!ΐ44 Jb66%C6,'4Th͍D4׹Ժ`L4JIC2E"G9*LDhKdT9  )T!X2JT*؃ҦF9SIP y)ĺt:1N+(^\2Ǯk(iN~*E33E%C)C: ӺLҥN3DD99 m@{A>Cڳ:d(Sބ0-*Wѱbp9r %)d?)xP=ݻ w )F1Rٴs/{WL8sxIiWoَ}y2^w `tk:xcWɫ9Ǯl'iQ{$G7w#EF( 3['GYGۉ@bdcJA/!W ;( -*֤c#L,gee (3m!!2Ωx{OS׸ps]jӋGc[?JWJc.+c+rxkKfot_Xn\DxpR#F%|&~]q?Jǂp &vD`JLƻ9Fc;H~1_ǻ;;ckpN($NX/syӹ̅lw V5%w_1 Sn~3`7 {Xx`×v a- fk"PyIp ¿j.o*-u/K7=bJf6~)u 1Gf8W5S˹?<_0?\h]_jI'U%CvwQV`}$x5"P#E$.kOZI̾KUB(KPZ/YvMVS \5\, ly\P2 嵍q5}н-|QJv% p ᷤ{kma (Z2 T\㼓<kC_+^-fXL[p(u™Nty5B\d4ir3MbQA`x!m cp2;':@ZTkw''sBTS7k0Yc{@بw+A1w!92g,Ʌ(8! ;,K BYZBT&{ޱ̥8Fµ j*']?vΖ1\?Z zitsWHpv}dL]n I;x]׺X0,]w1]i1"pA[iKI,"kkXwy  K[*0X#X] 0%x2} JPƒO=TOqyԫK6zRwTcqMu8W?Z:e8fF AI2ĉΓ rI* R K[[,y)@uBӇ]M2uiU(8衏2/O"oa)a%Im>;=?+*y^?3d̠t "B%u.TA ^V2Hdr|<n 6}">EiEE5, T,BS,$7"BOZ'31꟣?ꊬ7tn[ȿ~.Ѩt%Fj!*` ƔI.vnn HbKd\( @~Rvטv1UhVj7뒌tRe919})7 rFr,k$_ >ՑÏ,{`B *K=k 0a$y$,Kΰ*4^\--¬6'|?kR'`='`m/\sTQjta9#ɰvS3M=1x8Qp12bQyA 3e'Rc4!s8"*3㔸tN3 d `b;k<P/g6@)e^=K,o!g?{nH+3eA$(JutJ4R,UZqkFLd:C)H)lɖxʖ%ET/|+dRmOEV3=Gºgw"v]so1SR.բὥ\k\5']khWmO?:v_wttjMFxt[bL)[(E>.A^G릻3p X{!E^jp$`i QBcs'RWQye6;KY#]9djO 8jU YQ)DԸ^t\yIe=׃Y@0:Z-6! 
*bNz8dJL]cNA=q5lP7 !!.~;>q[$Bx++mAf4 A߽͸J!_oͥ׍r>I@( _t(YlׅZgNb6VL'ܧoy ~=sǤvM:e!R$(D"׉1i"G4i9|qX9~OW;LCf^ 󎗫AX gŵ)bzO%Dڻ#jg#s&O\۹r᥁=x6P_ xjg!uʐ$*fb`ҏvW0) 8]R}D$6) T֚f6#dw=y$H .]~z9'L9ǜ;1yLςH>SO@lFs$pu88Xu MAeQd(#Y"HB]QM>CՃ `M4$NUfZ~;3ʺWr椧>輣о .WEl\c"pXA!?b)h:Ksry*.p (U$`BBd'>5 _݄$U׻e9z,ߗ{M=;g<Q@56,"w*_L^`;^d$6% /3?Q×O~w*֭tfz{3wmf2wN[2.n)b7cel2b0]MѶ1g ȗX>ۡ3mX#m{0ѮR5; bMS|_9X%⧞5xi.$q99…#Nޕ6r$Be+臅=0vfm<[uARn 'HI%*IIm6nȸ22r+v׍c=uyuKD-s^X))Jm +E䇕rVي-{Zn/^! փ r8SFJwz>9Qt#rd._쀲R5PT>Z9n>j0$ ]"KutH| v6F)= gݫF)8c{c߷~ |C&pOb+R~(Haz ^ ȯS/~ ͯ鎰 ^7gfv/Rf+'=㙃C5ZVgї'ob:t[rYSYUECQ{ - etJfZc1TҼzCI:hϽSI8AGEBtBr,T)^0x E#ȜdL̀R뢄0sv2c FA^_yGU vGLh&a@QxeˤdxIA^x]Ur1Iowv2B._)0<eg? :QGw^ _h ޵V4##^wQ끛+OϯHxk^Ήdz3vzyG];g;sxqC:"v?*-VX8o鷝ۼqMBE>Nc'Z!߯聱Del!G7<1>0+jUk,x.}= =XW}虸T}F kՍܜMc8Y8p0wD= ۯ.S~  vVKS|7m`6g{ع*UV+%méi#ު6Z1e"';XkRs ?%t2&̯NY QStz94Fgm6خn@H(:Rk0kKfل"͙bDI vP@ed:RI2Z!OȬ^HЯMf nQfuHfIp[ROCwSz Xg)D!}Xt Ad^@L,LIQ4Y]yA_ "wtN 0g%dƽ"wٸb̳>vLEZI9: e-:e1#3FCVe̥0KBK^" 6Q1$%"k%!_bf+L PTn UcDHVX\e5FQ"0tBB)C1gVedQJʒȳujqXuߖݤ Y{!SH43T.a"'69gUPRd=rPINfD^JvA37aa5AfeV V̒}_% OFJB͖-U%taf ^BP*6R]SlQ9WD[O]R"e^Uò]+-/j36FpBMLRj":rЮstU_[7{Ef0vMkjb>ck"ݻչW-xqy}sxzcwU~ZL5^b 4nR㫦+t&ơP{RZ%Rh&,/XZШZخjOi'$/ֿ\/5gy jk5,lywXPBB+rcgd$DqzϚ -Kg9#Ԟ6mZmBWKU83@Z5!k _uFQy~^g4/`~6OF M!j}N8TpRI'm>~:J2wAs>D[r.MЄ<.8sd|whG|cF YxQSЧOu>%*=m M+/_nAm@ȫ9ck߳~wjg 2*~׳ʱUryn:m^=}sq8uØ;az Tg4k/ܜZ{u>bP J3-ˋj-ywfG|Yt83ԚϑoIX؝6%68-(&*ʌ`ZzPH_lsf69ɦq^Hb!If;V{Zͩ1} `| gYwFA_[cDM9>_G4d݋zLFi&3]\Y!u U 6؅!Cf,i uޅqNLӢ P]JJ`ZlNL*Zo<`MUB`>]~q([g$.?4&DMj=5yx{t}gjn!v_W)Mϱ^/1+_"FO1bw]C>Cx^Mm<]+6f/E3M #ݏ$U>~~`̅9N6Zֺn̸܇$#% nC,9OYmv5ǐYOU(-/aZ?QI`yDE=TYUְuy}uFӕ=}{t>>yǕK&;Tj118sK:1_t}5P?/2Sc}0ZKPIkH 7]8fG([{+TR'|f{}F_}f9/h UyMjULѻzH6l'{'KzaVir*Iً1cgtiIՉހq />:;4bhK=nB5}|IJGY_~W̊hhO 6@taedjЮ.5>n`qWAo*8P[ 6jY eGZ:Ⱦ@Cr-jx9w!I˓ʗ'mlbH^먬 RfIIR.`:aNJd`ȐoHe蜊=AC;_g5֍v!y\]$&/)ƺH*g肈6zU*`sF&Sb;I=]\`0!kp)rJ\B91&@`[p{^H=f ; \{(x:ʨs9FbRt%(~䉵 '`nENb[OF-%S Dc!׋vPҚLw* XoNLxkIEm+HaX֎=P(/QVhVizme:|bG!^-V3jh-@KТVʅ'0aFǮڐnb]ڜe|9k4خj- ˗[]S-k/)iU?ݱ켶 j8eC Jy7}Z>j(:B6jI 
VYdb^0tRw[8?ݶhNːA;)| ' )A=nNT k?vzöfDg?ἥ-їҾ&HxC/D(\Q^d#Ik6mt!1iITq2PDQ=jɳYM" uY cF,Rv1$> eO#*]pMXĚ A6_PDo JBJAg'maQ:V Vػ޸q%W~9 ;9= f0{ IN`mWR3IdU_X˦QO>ӱ(:q˳> jbX8;rӯߩ&TAn9}#9^#pgP.wiW] Kt{-#r91"]û^F/ٶsj/_&ǫj F- tJ~;6̠"93O]հЭ A ]r·IALjY@  sֿ@ =cGeK,u1āwƟZF atڑ{^x Ɣ3)ɫ?Ӳ )3Ä"DIj`zٓQVx樉 89wQo7Nu`>;=a;fݦ=&TxlDw(]$j]ŧiM\%Mۄo)m9J^ #}, E&,2C(a`)Vw0u᳊nDAR"QA P8Th 0-4cft\宦`c+˘S+3ibl˗Gw,C~\jHQfӫ.-K:i|:3]38FISP\Fyo[ DkLjFA,d E]X7PtsӟɠxlLD}Zw t_YBhq)%dpU->wn䢼{}& )yfJhlr(n. ZR JRr7h0 цΘa dd6,Rop)ų6\W,LɌEMJUifLTdъTyAK8vX`^"7SO ~\QnF #c{w_DW!` 2>rLu)~t\Q[v:/oZE $~V&}s#ַ.ܨwOmjH6FV%8= ?]2iAg tL(g~r7nt;{4soz}q)]^Oէ/շ7T֭-[V(B5K-޿lC}XI9.!WdgkΗmHDk8 ټD v H lOzCQ4 I?Dhc!@NoOӀ_aFpb6֋N$?A!ỲES(@]Ie)UbM^ֵ|pt."iaNwA=/|eL!!we^H{aQ\dV/x``e鄇Vj^H(6㼧ʾGpф \򢗂0F=Q:RkO}H?lW+ѷV|1kFa,ʩ՝޸{WdԈw˃hu։['oۢzA9fڔ0x2\؊V Py)AP H F[?55"nz(CF5mAF3wS5_CAbyWxWmV1Ml7ó ec'p㹮sMesnE]R֥"0!+d)2[Ř&|$TS|PmDh1XaWXEɵI`heQtHUJ . U_m$<"^;ڤ>Fyfbx9>.!yP?rsq57w[wY}SRw :s1ޝ+ qS]+UuiO+#lmzLƕ]nW`z?UކH`_%pƕ K/gw>v;q*L3BZZR.b^%Xt-,VS-'\Jo!Ty*M~30g~UcZ.lm%g,'i!`UUZH%Kj*=,+ V~ ncwU>Tv[)%ִ4ZwB fۚqm޼qA )AhѱFDIbT9mv #6) 1FCsl^ Ѡ$}=AUxF1Q d>CP8F2O>xG GvP#/2 1t1X.sX@)!"FկCɤփT%O'-/7[&`;o&-^" @Ravcz;q`D Ȥ*ª,,H(/sպpy a\.Ս  i55D1a+@CB%5(@l6\M')Go@Y }LkB$zH)/:D,w{6lcrL1FS䮅!~yY+ 7z'}#z' YNږ~6V^'Rj4x"6xtrw/RlŁ'h%k&f\u dN NN A~ɽ}ZrPU+ڂ!s»Zud*f޾W%Y P0-YJjό|ƨGy4?&'A7ܴVubUW\ #RyUKU"LjQh2s*JIu / UWОP/33;մRJ(*E Mр e. i2&WFX yÞ6 <P C8:x\BΉִҒ@&~O%BSIE6UlO6K(k豄A,@9hň{p?KFfxb}bUTJ|)) mLE5G#izVဿFt⑍Qd ZxtWxd4.:ZrzN,r sxWM B|]ſ_c#'*~3TРǕh4N#djM x`Yy("K 2ENwTˆB OYAr x Ze{pk}Vp2GK#ԟ,6S"p[eX[ֲ&Vl7>tk@XS_:9J3A5.pRO_Y +i0;|-}GfDã_ރF.ZP]e Ǭ`2[*V\nuY}k|^:pO/rx~ZwRɊc7{S/N_;!IT?ڂ1[g2&DdUU FB*C̥i<@5&DB(&994e?w[!*,%^AEg4Ƃbtt0N$ qI[c>[0R*ASM#,~ݮ۩/G>0/e'2vRjۦBKCi=(#UfqZHt*} Dž5/ACA \xrEOf,z3#NøOWȿmJ6 ˻FH)3e=ɢ >QEʉ&Zu.8 ޳m,W(ι0}?ۋn4]ڪeɕGbYJi;HQK %3;Qce *T:_@P#VZ# s 3R`넉B Sa p1FȳiD!)'U2MHj15Z GKɲ4Ǘuɰ5[P)B#e] ΍>9tXYl E7\I63N0l =нeރ`@JKyio \>_bKhz? 
-}VQbY:>ŜRP3e7X%Mr m=zY1UHNo4>*il@U?Ac̟U vZ!*p8G5L R0cui 3o1(D+FkJ Jm~J#W1jط$,uoEq?̪Fc\#w@Ke fj5j,P.tME@:zM $q񹑝!eAt%cIPP:RW#3FlR*$Z6N9i-MuĮZ+GhS6[:rhnDk`Cuy\DI}H@4؄xQq ] J` zr%3\f-r? H>~J`dRYPGԱ#%k G:t`bՕ:9P: z<"{<diu׈.-1NG0;"k):(KIqZ|Wb7΅hbr M`UGLEArCAVi5 #Bɸo987M.T p82JLJ>c1`h\o=)Z#!4g̺IŢnR @@]w[NB%pZ)q\T J"|PQyLw9W^[%\ZQ>K0EǸfb}/cM}bΞ0#u h>a $հҩpTJ1[Y@t,C׀x6S E,83&!J% yhG3c,,$qD!`8jq$j@$ J*֤ 7`& Jy4cPe_}h<^ۛU4B^WRJ#T*!&F\(rv8 #_g_s;E._| \W bGp4NTsmlvuμ8Kf\w4-^Y<8 _h&-ETfҥlj[2 4BX$q Pk=*;We3%B nԋSM["Qa,vA!w)ޣPTg1~i }FjUouhMS ˦:0ሞa$Yzf]lM+W./0PbySo')&(vf:*+I9lxD&.x1-ͼWR~j#D֟O86iڟdZTZ;YBNI*:lhsݙd\wқ>N݁M: π͸Gr.ٙ Qj^:$*$l:giIz)2ǽ̸J3Q![A 6U&)L]𵓖d$QXCCe0 ¤vpY;P-;\i$hCYY"UaT譌LS8S>SϸL@%eV:{_9(]c$.-_1%{sEhrT6 ?Ǹ*J|sqUV)+ DR&QFUWU?sAw bCSr%m$!v;{?s\C~D;[ݭE(! $JH_-ܪ3~1Suo}@"> _||I'}e `7p}@@!\2^Ɣ+_Hۆ,oK"!yG,If9S% y:k18'qɅ dJE@8ǂnAkO:p*LLjJ0e:9@2"B[Gqq]4*rK+uD|Ik,M!FǗ B:K^Y3=F1 Gs6rSSLaS,@e3x B ij7ycJ#@NW&r͵i‚GaUԁA8UW頼:7)tԜE^ z~ZX?o$˷@5#-HE+*Q6;%2zT( .4;P-~Swթ\JcPjjйԞ ;^{H:/9p,I"%WAސPHdQJ<!A|F(Jˈ'a 4D0Li:;P-:;P%BHnq0^Rd@ jq+@ 4]#rRa | Mq&Hh!o6 {zKmnNL8eFp5c,njt3q߿OI$x|6>$ɑ7Q!I Jsc?tRXDo?Bnvh<^9| mWE{r߽.E?G'] yvCogΛowϐeOoow;'}އy_g23GzjjO}~glt]|z>s9z?Oo~{ӟLYsu$s]9?2b?oc~'J"I?40@+@8aPrŃüI0W᧙=wŇ WQ{e] o3ӣf=l~]x/*<O_0l@i^_95ixuҁs{vO_Ο0ѽq4Cf{iMxv4N_?;h~4-b/~7M>N~J_v' ff7i~;7x~DbR kG'p_֟ӗcu>oOd=nh?O r˿ii}_3d\qugWap3ζl&O7Dp\'̓0MPù eJe3WTI9ف- ÍZ6j邊? ]𾑴q r2k8Hk0CNi`cnGFhȌ60`Э6jߐ6(VlVlV\6 ;"(Fģ,Z:'Eֈ>>Hh\E%ջ xYRZ(z Ws6v5'G5;\cZ{^U a?5]WVgE)IX_w *&L DΥR̥"@HZjI=T `l[ӫ5%B[ӫ5Zӫ5Zk} b_4(IEIB.JrQb.f?zk .꘲(#;5%'֔mc3">M& Y!JrRwg(ɡ$/hr[VIÙs^A*Pυy` ~2P:@O;!bd鹽NkuJ^ EέRj V '&!ͨ9P5_n`3 鵡PFz@8=V*A+$AJ "G#B5O C~9Q~h0d! 
`C\!nCB,Y gA0$G!WS /Vǐײ%zg$Lov k1¹u#c pHx8†P3qL1 .Mr>W}fqp,Ybi,y$cI.3?eL#Sl<祊b*BPmuA14\K"Z R' AAi-phbU(Ro%T iɺnԡ$62bJtB !Gy 5s<u`' 3@@fz,V5M@LӤZZ*x.Zyhz- `{y}tb 䆩z0ֳ}dG9 /v^V4OۤHja)5t`Z{ƜgpjJ`,Ir=LAʥ9[QyzHBb` '󾭛 'gpwԦÀݽ G b&D?~y{?I\ 2~(VqF+|)ex`GM k]^% t9>6'r;><6+rݴݰV fݠ\mPJo.y]S' P$Ԃ;@BLS7]WO^ɕA bn0k$R?NHyC 7$SPs΢ Z.xiGTkI4&l g9 *m][E) Zz]z^i<}@33(29JR h#z.c炋 jq>]Gc^: DoַV a+pv.PMG+-!S8jʖ)yKN}h g>qw@c05B>"ꢄ t,Z-8FW:1)cW3o|)Aݺ&p6R=g;6Nk_[t/.T \ 5ӂg Zfih@{L nEC*j3P0G!|U!I5\+uq-:S:[EBƸu3%y Ζ!n8jZ#Ls)ԄS7S&!&x (_qW;O2wn.+Ha]./>" eR1 /З)hp4BТ{<`-\jZ%A=Ab53"Xǻ um修懷M;RRRy*CdyqB8KY0 OrE4"cL |օ`XJᔥ:kQX1Tp>7 wlkϋrHdB>XkJV s(&A j=XI|+Tz9(̄C_JZ \Bu7R7jye'`1({ @Dd``/)3y>@ FC"c,9\9& 1C-r k V)7GRů`࿇몝}k}uI;+m0 !OfUh [o8^kɵ?/,!.3-K+_I%&i]{ ߸k_aVe^ 2u>|0J>,4 ڶДlcup,3 {Gه&9jժI>a%*Ox+0j >,Xq_*ǷJrIv(Z~f︉%">欩zr;X>qT"s}ɇ'X}Mp j?lV T)x^b /,7~*\pt`2J@;x8U;wM0/TS|8\'\`Z1}fi9ݑf5;fKtGj Oi&&|گ4/L Jħ}sOJHm€DXk |گ(-d;Tkdܰ %+iOrJ e9+FVl('N8WP6%z6)f(W7j7-U&Cx^f`APP ,\Vљkwߌ 3YԼ8M~qqL~w&mpqk77ή+S܇0t$u&Ux|^;+ځo01eYd˧'ykDI@nR* ՚ 3@IȆd +O6UNI9Z,HpOHE+L$VPUH=l+dSK)ȉjII(3 J (An] hop #8#mwN^4|ׅ)S.H|r 1[?d軐 @=Iq~<[U=Ѭ!8(3-t6 &T68eЀ5Hw.cI>Tr3E$p#Q%)l+ֳ]N3 ~'tӷy;<ʲ^06gHhq+`}v:ؐnga~j)0\1s л? 
)/׻K_Op^LSl{p?L,֛\s~rr=LJ_ᅵovvϯ;Mon QE滾nl¨5u9񣋽.ouq.gzd`x_IHc]so`rG Cb3.Et?&٘ Hxa ߗ^538 (#`{,lOf{=F}JnQ35}ppjNr'sl 䆣p9{pARֳ?_d*\$H0n=ui4&%ׯh>rAbZ)r5OvQJӌ=xr?tV .u)8OBIW#n?Dݟy}E[VhxYLĕ1e;Z8JX[*ԅE,-] j24ō֮'3_EI4- x\ B &1AD#L ` S+P$$8{Ga#cFZb3`>%l;7ض m>6PbrFNk1I 781]}̌,S*&G?]6%ؽ:4{ODf,\@*-~<w@Aa&DEDTpSJ,C{-uFÄdtZ+Q}Hp4O Kl`6m18NEDvGv%\T̸t*c C0E Jȵ?o W'@\0xOw8wRlO<RỴy9"b*23)z:OAH E!@t3@ϚhwmY3;[J}X$II06QOKL)e;S4ْn%eSԽz=?ϺY'],z_M 6MӚҊ /9^ZF+cV6mǞ߿w1'~͞d~;kzҭUS#m> tj5W[4`%HG7`M'[-ł/NvwlqpuvI[Fz1:k6y^A&gm^醐71`e}оvYUN x:cd7ҝyog&]g7?]6asF))1e;O6=wS!)ֿ`ET &&C,X[/3orx#}(7[?Y9[VN~SxeDžYe|4 i-|\!,uL ݕ h+|].4ƄZXUHɬ,kepߙgY,ke}N0N2lɓI^}\s=3ƍ`>'Kg$ h/2+~`eUPw)#kN&O {W> e6lWַI(̿O~~~1/ٚJW/yu\}>sO="BSXwPk`-͘_/_m >[m==B/wdtYyrrrrj{Γ)h-JTSDJYsЁ(zNy/_ϻ_|0WFfv5h~^czlyR6ڧyF8Fm"O_ Oqqs'E ?Qӵ&?/'ʹ3(<~R9dv_N]z}{->",>%v5 rM<ڜHGE Ӧ̉}+G,sӼFˏ̻*fVo#?GtJz*]1Wɲ޽Ylc+Ϋ y<C 5 }*_`.apK ycmt\W..ꬷUrN(3"}܌ŒgL}=S3K4?=~eO1샾?o&)'j%)}LD5໅>b*S9AXxH9!L`c6WCO==_~fOhאSQk+Er=9,o!P3HR\TF*&y 2AUUo]- ֜6$A]`Iڄ,hB]֣)GΞ/sy[F>r)Hkm}9EvgpS Tc3xMJĪCCn\I5RRpݸ%PPQ*]LB5hqWу\f}5D=WMDEs&~iқ-XeՊȌjOHSRbr=K$W$ M&S(%B7@]- BTBIOm t hg#:嬶S`<-ûDMV3Xqr;Є芒%}P̄,J3:$4[DJ!bP5+Pl4S.GaƑ>wJpI`fQ֏K2U#@C7D|8fbJT1JD )EbEC𮴁gMjeH *k*< mdC=W2x|#KgWyu}6+}J1 pQF6&@eAÙ1+ЀrgJB _K;C@ҘcGvlY((}}f>86h o-{oj^G!E3JfL!)^0yt4*N/R*]uMBZGT_)1μCrcryr~ySeBe-@cԤ̕, kIYD\d%*x9|Ea᜷M6T%%R$HU@ F@c Y^+;6;1nkXam56kzh(Y7/BZ{m(=e6%NIqs$z4l/I2 H1 X҇ /sOXr! +x82~g%/W_'6՜"{KW~横.0_cp({ 1>T+!۶A"SRRJ`75E63CTzDGKCk^eY^-/v"/c[Z"ojM֧\Ek+|/oq%_ʗ_~>W|1 |:Cۋ\sҮ}Y=ZޠRP؂;={{oW!򯧋ӫ[i?V8W;;^/hu`)Yjv(3iN%Bo+ADP6< 9E)}tPK 2DgqA+Dg!xupu'TZi1vC~O b#n5,{0{Z[PFGO[;(nsɁvMuHvZ:J^o&3k$k@huP5.ޭy*?}8on@%Lj"U\v1/I . RlޟW^>Q!/ަz}ޡړjw7n"īe~|sU톝 }]^̮a{%?;gss@5iV: ga!$ʺB[ǐIg\*U*b-PJ<2Awp sѸY!+u !z#3>/S}͖v.Gt%O.ӳUO8\UXxv$Cfv0D|4GY >1|Q~%<Z0: t a,vh`_z daE(݃Q,!rșn-U_Dƕ1y4 ˪@yVqf)` %- )l0S&fo"S':Moi:"CO;-+Sj$\##/$8g6O"0q L0" zT"a,s]x$E?|*e)*$VR4rJTEejVR,i*l"&eV@W28\M \CEBK2Ln~w?IkC"@B,#0Q(aYn)`TP&j!vFGJ3GUyB%΅ӭ(=MQ :%h #(~!t>+ZҵԂLo{hIخ֋uwsk8VX|*MޤBjU8 *%Kbzx f;|.%oUB o qǨ"s/M*@ =zfsV BmvT,! 
$Y2rB$ E*?śp*YlցHCt5JWۮ|Yd?:ywtr4gg_]f޿tdv0vߏ?)~-v?>2o.] uC/%8(jy.|g[T=P Śߟ>NQ.wxc 69m N6>؟8ęwC/^iɂ϶v0Œ\3P)O~.kJ+U3㺯y ? ?wETHPLZ}d9D/+ U IFs='BQNs_/nhdIx2 R)%7lA.rlbqe^V̮Fn<8׌O+ }Z8ZGE2/DfYܩ2, ^_?Xx/\HONFW~ c+g["Ar=41PDmu WDP8VtLGb3: "<+WB 00+ [S"jq2AYODEWj!6^ȹhCPZ% ^k?y’ Z, ?wmh}i Í{R e//ok\]1 _?>3Jh˘ЩpB2V%UxӖ5[ޱQ+]SΛ^Sg ĉ=\Cŗ]U_Dݛ]ԓQx,z[Z K΢o+}vOqcQ.ObOBE_;lw.golOHU}[7h{;Ji{vNڮL^ ,Ubprm=}gswP*z"Up?̫O#K@>QK_2>b>Ͷ?K9s.q\wt|Wpj$<ɠUN^WD o]9+륽LE˽gp; _F?NwUYX.ky 'k*b6Uͮj8~y )6 Buu4>\UPa2f>a'?-~;!&8} *?=n2-^YgͶ\&ʤZrxLxذ2L dzʶS2,#wd #.2^D'Dn.OģmJ|6$d%b*'lާcLn@&B>ڔ**>|Q 8!NҾ?P=m;)v}~\)Aq( uA#a҆Ŗwl̢ Y ,r^z3yWvYypTOֲx3qi>)9PNtNKC(͢>g8[Lnh gd^Kz7RqQZK VV{ 55MAHxhߛt#H[JqDŔE"B(MRE2RsZŪrNb,ex8.f7~˗|?!5[qҙ9G\@#GoCn8x F I맅87nQTws{r%-?иV ߨt@g4 N' 3t %f.zA#BRޗ >ODVr)!I$-g@^ɠI`RO%M ]E ۾|o1m Udb\Eͩ"kZY[6k~nTc Ma ޟ^KT}LZwܘqlC)%\J/~fۑi$?1@{+BAmhVEI<~&mM+oZ#4A)M"L^`DR!R+fq`)Iγ ^Bd+[R=NDDnx,謁Unr1В,%CEU.?_3&ÿ䣁oo~,k<7EO㏉CMu|̟~l6_ e~|^C{Pڨ|%9* q湂 I Z+۽eke j~>S]aDסI'-5 v]+l*c'E\!F/M駋G+,!׾9taYJߎZKv\S )3kWm%[* HafW̅ɤ\IWS̜rJD_D-x!ӅVʤeNӫekQkv;F-c#^fcLYcebllD 3>,^><V0iWvp$ GGdqcqS{=YؑW?>Bjm[|GG1 kdg`0b^Rw$LQߝg0vW,3сN$9ՇԐta5.؁5L ~wz6茌t;PQ؝wbq/AoWAnVr:0?R2>@wݼ)G;fhNՁ5pY~іy2&ZO%5SV")e{jq˜=Ma){ 9Kmy>;2]t,WڗzYzfb0*u^6ݜm_vn􌵖yc>m**AL\AW غ rۄ2YoxRBlC:5Ѥ' kwH7">D/Pw47k4r͇Y %8VHMe/ <݆IBΕT9Y'65e,%Φ% e Cu8a!C.U0ܱ7i{176"[5vq2_nFވ!4㨘b)-&O?*^XnonYcb&r7O&F,]ngX_'ݓ~mZ~@pcT_aQ6_}MɅH).-!jg~ T_78S\Ͼ sogqkB:D11ohV+miE0 LC@>d̼f'A24eJco5)˔,Y-)d̻KOuW?U]ݍj*~>Uϐh aF޺J{YY-S_ĤMM[:9uր|,r ~ܣLA . 3N_2u%lmc[^hNS4|XH^c? qĈD$ J Ca!gHD!dːaF'\ FhiyM[Y| Qƻ$L.i+9d'I]g )%9 ;m'qqUBZĀޤu3sF~()R5F؉.Wܞӡ *(YNjq Zv\jI3.S0[l좞5e쯂4nzꡛ^6DDӧJRغPʾ@}_uC\bAfj=E "Ԭ\rL@\:D71\ w|PQ9L8hh` y3O ;H-Pw\LخNR#ވ4@R3դ9"imdÊJhM+І䜀1Y (jLfh#? 
4~tAi(u`a62V*kMiXɭEQ-F3^S❄$W:Y;b/Ad;Ke kLu!PQ:.O%K\+]I)W1`x~fӂ D5zϮiJ+T.'WG :gOR1+k()Hv4f FA#vF4;ORԵrb =Нj1e Śϋl/Q;@{ld9<ᠵ8nܬ`xAƘiŚ2,&l=mm/aMmv>B%%'ha^iѩLSp85`i/o#*LԃJ/&ղbsФAp|2rKTN5:GO %-rZp?$c)5>\a&[SƭY[`qq"V7L$ZA:-0$ko_>Txn;ORTzs3 1g뷑M!Rd~ cZf gAųzyu_1[y͚`[Zw2jDp3&+0xW4| מVNjMid*Y;S/A$Գ^D sC,kA\QѾ-|y xˇW':)AЩY~:֗w C;Gr'NYS)f ih- XDIOx z!fl5k\>좀*z8P">%v^PqKuo[ jD-@biNJxp^߾C!t*&'V0#{IN \i5p*4B;gu>+7 9)ajFՕEkʯF|)xkW DSaoת5ˈ斵3)B%#1뵀kda53T/iXmUv@e58Ky"][v;T|AwDaq`jjB5 vH=u~V5!F^&ׁtGWp̯FhROn&Tm=>QUva@S9hk緷='QY3mjЈ6`ZȔqBfEiѓ(d+F.*%yK+uԿ] :e1h^O`+qfu31sYe&Ӥ_Fzш2 RYhb+i$;6wbJI1%6X-(+;bF`1!XD ^(5 ėr^R÷vz;]m$稗뤄O D]?N֔? asT"t5<0-bo4Yr޾P,Fjrдe6M8 (%4+)3#{!vv^ۆv-U ?˼6/6m>3J蒰:JZ3[0Vt&K\%7uH74GK:6}1N1QkS'">Oҧa.x:4f6Õo׆DgBUf*nj%Λm#P˨CU|^!;\q.6 7q :a(kHHJ.Nv>:) "n.[pKiܑ+V#ɚ׭Ջ}՞:)f:̒+B~Aoa&}50!{4ǸD6$HF؇f5]rEFwbNoI7~UsSM^3!g\oO5eB.}(rVfjΟd}-5m/sIjay+iUյ׶vulX5 5 =Vդ=>PH;ސ6X~lDf36Wzk-0D)ϑIDjR[!x!')ƌ6Դx5f<{u*ufV`Or. &PmQ\}B9WGgxl^vh I'uLoooogv+ǯG08ϋ$__g_^؉?9x^JHc?4/3h|aS'ܱ"L6KoƌcTfs* 6 .J{z~`*_ǿ>~xspk<:)+~oz҃ Ñw[j4Efw[Q8:xw9؛ vfQ *^=4_纱n^U~z3i_~_}~tq JbX%R''R&\u%2HI!܊zD:peU{B$S _d"UP$J% M;3qh΄+}Ibr J~YYi橻mo) ߳nߟto>^jP'7{{77%P}7]Uu 7 ^/$Kdzb%-H8 8`ϖ'[3V@ i[ !1$r*{,uX:nxh%{!n1j=L.CWfLyL?<2Lj/U( CNϤ&Ai΍,Da'K.,.03Pn--MD"N8喞Pz ,Fd y8=牥!óp ,; s=7,q,zܲ,xLGBh~Ti^;!-qg m9W˗*\UQȟjUCݒ*%5zff X\Jk.[B<`:W4L:ϴs:WɨZF't /21ԕ;VT ItLԏ =:}!L{#OJRX$] B0T?NGZ8ՌCKCQ@q ;ߓ]՝(|u;3/J59LS(4kNtᏮZkX+K[2`}}0EnUpqwDhN');Ua` U1بf@m!bz•^ nA:We w]U1wY8(F 6tbˈGe&V mp7JǤxpZ=RVIQXR0,O)`4( U8@K F&1\i"R89BR1*-U*[x@MfMxN&*=ZU>AT&^-ksEvS*%YI+`Yb2WR*_F A`^+$WXK@ T 0)bpQ gKϊ&BPXI-+!sԌTɘyHy0?i2!l;lo,r"|c-1kkvFm]֦} ʇ0+ظ$I^A`cJ!OjMˈ MU-Pt*Glr#ҪbT٥շ%=ei"F(VuK{ٛ5h Ɯ*v2s\=g hQ *NS^9VZ#Sr$2`Ya"*v%glg{ >zUJ\ RR(DgZT'PJps"z]JHJ$Qyi< Z@ Dc|<6@2OQdYWuk+iAM`hzC]^mO>ϝ͇,.Tۆ?g꾺5__-7uNj_-} R/_r3®75_ ]>cN(JCΠfX,`/2%`i4&R01nV4A -ړL 1 )QK 8BTi@VgP3,!uRU 5-v$5j7_.n-Q.>.a؂Vb/tW^UsI;_)\"diAI)T8&1,Hhe)H*|V2dJJ D6zgD8ϓ,QFXJC_Sb$mPA--\mB"Cy!ds:%{@Ds!;r9)d`b!Ĥ+1ɉmI j"d`@ },->l)c)¬6C - q=jy[Z;hDS hwͨL3FX&w1T)Dpl;aP_Պ5K0WDsrQe \ PދRȒiOU4сK҂PIV6Cde%μeH 
R% x cFɒM٠m+2pm#׭nJtV\X2J{a'Q*+EHet8 Y3r(XYgwl5"miF+i6ஃR:k۲\"/X Ӗ4O֐RJ5mgLk1 [m#v! 3t+8x@dvZYFzSLPFtje=K\"\nz?(Umoߪi[sJCb{7EC-۶֨"wA ZvRݳnmj9 vg,UVF75A]q_,2,QRp-餘^->ui|is t__N?TۚФ5t[_jM@ćsIܵ shWm6͘7jwRMA8+pKg?ѯ=Pr :-?Bo=r֜pwb0p?kQ~ўFܑ^ÜÈ5G+u>/A]Xp;N BO=HK2 <gjKxE;S<7Ej-[4l[d65(<"_ }ڐu`vCz ;s20BX77Yܐ RU+Mb?H#p'z={sa ֥`|: Z 9 ʗޒU>r_}+;yhx恵R8Y ;d kyx3 Om|Yǰy܏T17*kqa3Rr%^GwkA?Mx$T6#§,8)V}85`_e"<*OHOd\k6$PHR•Pڤ\x2k_{WX(s`pB y&es'?x2Ƭ !GY)2(^+'L)קdXVμ\߯li-E'Ik:7XdI1=ϟh>rY7*0gXn_隄RGUI{8RyƬNWvZTT{^h; F>8Ɯa+{H|^_ΣgްWgMIϔOFëPBߛ8?s7Pؕ*᭯ l%ERMT$JF;AaJ)X䢒1K ,A)"&!BB6nSxt1&/2rez@ U頴)K=UҥWF/Hhm}?Tan6 IR{IK:6??6 u+::3Ɗw9+wjK țֵ0~l[#zX a5z%T^𹻹Mϝvf|1Ziq`U gJw36,/x o,/P8#ZJyKmOxP!dqK?bFbAj4u=Y`=OvW_J^JٓӎOߌƣɇS4X 1,ǻ^ xc2^fAgD1j@)ûڃ.6h>Hڈڛ-;!7;Tzv71R?,&r=u1]p'?ua d<{T:P#z:γs?`,j@ӴwG˞~|`|]lBO?Wheӳk{{@"UFX, W94^y=̿{cVτ:_k[DPZaۖ;[|25Fd( *tzK A{vO#dׅ,2Ƴƛ-<3q{qϯ`f3:;\(э5t/,eL2 ^ .FkJF).e6 ot 6M!*C&Ga#89_|=x{{;Ƭ~e0wA_T o῍; Ξ&MgL?ĕ9b->Z?bKZX|vXLqI?mN2/SڛMeofj}I`,{%y e% 4IYL&DͧF{#@ر%r<EP79Cir MUʖP<ETœyeA(Ih.ppɃ$se,(gvն`йZ} j ˝Ǡ+R/Gl߼Jz < V*3V9;H1syN錰?mo[4|HD_{%Ao/5tgо@2t1:R0!L!J %!vkB[A1zF8=D> Ef0!=[%ʒCYry...W:'ڡv4h4ò汙:.7zXV\U{\7-;if]cƍ/.j$X?9uW\ EVeky(cjWR7^YRR_6ʇZ 3) (qQT*ALgD2ޣޱ%ⴣuJNKj{^ ۷n1if)r, gyܡՈ3.xE_5AxTՂGM #/1Z)RjZ@(񪍔>) nXr.g]4*C,Sm,CQLQYK9i4+Aq uy>`YΔA(øQlO0;!+6QI0#т2jG,i@'Pp2;oKB9Y$B0N%ؖ@acEo61"H6&(m*/ű)l{Mߚ p1;NG5:\((\5p6R*0,9mGRkR,DdjU 'Is{wˠ9k#uJ^*%c+"q.u ҅NB: MzVsOhQ h. - ЃQ4s 8 Dn$]^O>m*ONocGd`)ܙ$w8TF1 I}ρZP-5UJmSThs<\;ِFF+E:QA$E~A@@;BYV1~jT ө^\\>,sC8xɡ tsap5ş0aww{?05?\L]9#c~Agza^||q<T?N\QvTnaһZ07N>>~>6řf RJO:SXTXYzI+62O>d%ڭ-)M i OES[E2޻Ҩ=~|68PO)Nܨv7*RqX]qX9"6q`Als*D1BrUTZ&ܸ"C5N'NZ H2z*/ e]y&1 m~ jt8tOvKs ddpaRh7Whu0&;eHj5$)>"TNyHQ32`+53hd FH⇓=+}]# 52 N1i /rߤʱ:ww])'(YlVĿ]#y"I )*Q^-T:jۏ}?Mot"Y˳E+jxКnS$ٗ*oz@Jm 8״IFHO}6:sLdiE\lfk_Eoc gB_)'2x?C/? (:"#J0"V7 na|;Eɘb_潀'G"vu^2)CC 1C\,MV Vpf=5;?B4?V/> sEWV8^:HOE葴] Gu}oӵ9O!C׈-r-U)OavjwLQC x%:JuU <[8 ZSHSRdL &~.Tg >UCߐG QqQ5!UG1lۂZ0FݳOPK `wS>S\VJy6pMEIb\P'QWocc?҆\UmbրܷEW{%Khc6G=wj SrTwUwsŊE}\I7*0%CIZMJ>ܮ߉t[.`nUp. 
0&BІ Aθ|ڙG5>W6- xvp9my g}$w<4;+Tc/<;,\iu,qOvDva?`lX_6y*%hs1*HgF,tq#p)ˋuL 8f3tA,9Ni2Il8! e̿3l,mT0scu^=t_ؔF )PTL*Mܻ[l?vK~)E5fXL g^54W/*6Yǘj)H{N8/x1LuPva[Bf'vp]/L<>V[˜6q Z$URm7J΁wM۵JUms7-ZlzJ:` }Ipz%XsFaP(E|a_V>ֹۘZQ)C?~U0<-!¼B%a=IsB r>e_cv<㤙zˏ8zl|<17ehGvn3$go4 Irly G{XM6̑qklF=P?ۺOnUflئqDJ $(9ndE 9ۧMabm tׇJią#~fڅwFp톛UNPtTl֧X{Fe22C'1LђM<=Ru!<BrTOf=M4\8љN.`/N P.0mp|!  hsQvs' SkT#ϯsw eh$?F9zeqdWhr \8-ת}-eŋC2z("?fбE.nzWGzg딛+>aYHȍ-'[+|JJzu3.K,+Hy3rzi$7 #i:xz:T'j;{wt{鏨|_YoJW/#ie8$FoM:v1|4قſѾ|>GR';YqycY`ɵήaGƧlZ~}p.-Lcք4M8aM8MؾtK#χDҋZyb*E-DzL-ᐤS8x^:z ޒxQgWpl Nnwo⽝]@bDs~|f$SWiZR嫨tk*0A竰sJEd#Ɓ@q؄~oЙo|ȝ'^ չ cUs)/8DƙwAo;i*]),*3KVrt0,aX7I%ʉ:E*GTQ(;ƬOsXQuSe3 h]n/qSvg5@^Nq)V'mUʡ&zZ 3) 8W,\i2AUeL[2Ë+{ͺf'I` ~z߿!.62΢`cs{tfw??GmpdzuWZ00(lgqv|e|$rMQR/F (᧝I9*p r#ޖ9\J|PDqnG'hJ|M}4BOQW hVW:o3tױ}^llKZDEC9,}ygi(xqoa+b:&ﯾ޿{rA~_m⿰7wWulX  !o?{&vLHvէn-2/K5d#9D.}q۠jc {97zFJ%L ME\1;z44z%ZHiFh\HۣWȜN԰zDJRy:?2\i,'f=Ѹg !"1Er>ƌ3 DY4CkqIjvc:vPU'փ ,06")ѳnAA4H*d LL0ÖmJ 8ٻ6r$~YW#< &v7d32&k$9`-nɒ[D0VXbUU=~>C z>qF5]1칡If6O %{7WU*Te[JrKHEM\>3&E4Vws{~ipi5WPrFB6D?=e P Y6+.CQܫ}=?} &bذt3_߈w30sQ+}&4'<4 eE bhC9[E=hcAk*9u P~OϒhGJDZ^WJ4Ҕ2]d7)6Pӊ?[zZ10^wFU3&@8k+nE(g2#cd$S̢OwdfL%LC >E3M!BJ0ڋBp%2f9ƝQņ1Lf1^ `oO6=g Ftm1nMQeBwN$c #K vT2BH3=#D78@ăWBB [hxQ+"-c<9XU^):Ky`LYkXxQ&<ڒVXR E;O9V_ITE+S""h7PdNj m$NJ'v۰C[;kҔ¿]%|||ȳqrs_дDdSdY/'ZVBħpVYDE'ZRt:@wNp>GpjP bTj])ŕuރ4G̫“ĮJQ¶G:ut9Hz+oEe-5wj7u iq ;J\  fcB3h̗K} `]Fh\pwnxkVngv-n ys;n[ =L]f^]&W?oa,^ߓW{qOf@H<ag+d<=W剈3@Ct&:K#WI-,Wur2VO8?G11͜,v7^YR9B@dz%R&A[s/fNo]Z%gK R)<ٴb4F6S[E}4&W|$> i9u~ jh7w9'%?WEƊHNy:ui 5<,-K,r8 `!ЍF}6U`(X)U!WT= +T H82 KMg*AI5a#Ht&We)hxE`0brݽWwT^w0O&WK=lM߲g39&(sG-&!9mtE1J+wW5XUh=z i_zozg;w&bM$&@!ApGD8W*clKGE c)lq7-[T;m@+%ɗ"P.+O^>8ФRJY*3<:*ІL tC2.Lg73l mWxfAR$556RU4EzA*Ђ PmU1,_x4V Q7( \,-*B,4ܥqҲ<]m=JESv<M::6s߶@(Cv]0N:Fl=2唍q'#Ab0:c`Zc}r͍̈X^뮺ۥKd[ҥm;tm 䊓 ˼0 ¤G !6dӃ7Z``8D_]<=LM5QoOy{ ʜy`Whq@:dHd82$dp h(0kJ )bw[;[㏣V,粿y.J㻗WJ7:[GVˢ}Ur옻// 0Å[˰#mGkv$=|b&#Q&fwZ.<8fqߴ32Vc6l?2V>8\w'-P0d[s ={NOw߼O7%$T{c)J;lTq 8{vYBa'F҆3ðw8"K F [9#PzؕH@rXgɸwt6jJ=$gΛ]bo4- gni"5}#epM@EJK-`3S@ +h- 
e!yHٻXiNbЖ"[TZ缯aӵ +R\):)/ W/Q?puf4PWwiJY㈗7?/{z:/WǷu5Ӧ3# jDRs۠\>f;%ׁ8B'Kr{uzv9Ӡ (+'}|%s7Gøi_JJIX҄Ib%U(+U4Ά)Ԡ :GI )DIvfAܮH&T!A۠D5EB?l]٠ J)N"a02qg9R'XrZws{~iŹ<4m"o4~a%½v:8^FicܣrqPoV'?}ΣfWQ"F2 p5AuWKzKL: p5vWRO2$CN1\cr5g)De70u+iJ-/e+f28T6@ԸZ#D= :T6@hw(ּ`( Ʃ-YW$z0 S!aQ4ET7.tsFt7q@'N7-FH%oyo_,?uFDn'7- 6"qCȘh6 -\XfHm,t]lP'z#!E}uwTO_>Ǭ~2F@ۣl': xW?/1=8_ :bWu'&(^Т4.ɘ7V8]lHfs82H}oCHÁ&n.O =H$GMV6FUV %vhXbk2B+wZ-"]Q>>~mO"! !KjD#^[lNH{dz$l>kwTRtr#a;#Zqn;&B@wJ2EH0`z^ړ&LP3$J:T6 5Z#{ET= jECq?䤖E$lv1qע|_ ΃;yΣwo@u]h++bYpVyU܆2 \q$ľۥrゕQ"mNKMr$w'˾Kr_'0fgq,NbI8iY4o}i)yjTX0iM5IeNJ"$m(Wn[e(ЍCy13%&) <1τwYZ#v&Xj9%Se دD~vBysj_Y2վ^ޖ>Ns@+tިZ5DTik!$=x$ɩpIZp\vr::Ţ~S1OK/(^d'~?6;yu'eGDvr:K_=1ꠗ~Ia!K875_GE,:' ǴfKe`j1*SœxxOO@D uɀ98ԘɰRB7,k6H dk(/Iz܆s_8qwS8bA`Ccz:Hg3sd2/+ϸx&ϙ15_TZʜ+*` 29x+r|QX%nn?G:>~QuNL30&3j&G?xI?;hyjLT>=~8y&q&쟾o~R=q9BKLh̹#&b[VtєUabf>?W"jD=뻓RpܑW= Aÿ,"ϊcb{uY3 q?+Npr.5OX~_'0 wޕU4镢9vsG%jZCWr3[Y$YtA?([ =rgz=[uiTؤ5΃>@ܼiH?v7k_6 ޘ8sC+lr;-Tֿiv19z=Lѯe' xO?C#є6tf[?ᱍnh^p\7 etO'gs !O]~82S+d ḲS)q3ZSEe 2g,Bi |*^w|[x>L%CJNdYG}T%5'V\(JqqXjH3fGWH ?M*낫)Ӊ ˮfҝXz Q5׽A?mžǼkb+%a1e}X[X6a֣MJL- Z@ꊉ~ P ZG94~=l8FNDKA7UZ-yNC$:Z1$iö[]).|B'1k@7ȷtP,h-tVAo#,t ƽ~PK0M*:(65׍J2N$_d0[w!BaZ씄6/Nzc=Ђr~yIBCåjvNc k'7XwRLz=#JbID;ĵd8E]R9U@VY#ӕ>Η8_N|99_!&XNLXf=;yD7XLqN)MP^Ʒ7yO20"kقWsH8H{W{/(5UH!ya w7=,pZ;dcj.&DPCL"Ykj5 hnI"IE<6RlS_bC8yzWgv״p!gE g3MCT:^tBȾ(g@^d\IacZٚGWjkd+bLQC0 ٽ={zp׋Wdw:1kIq#HBؑ(,;8c8Q﬜stCiAVvAIM- H08 ]{LS1q]ӈ]tOp1> pb`bDQ:479ꐪ (b|# zR鄟 eu~eOЌ鵉!ǿ},PAVU[APǿDY;ոZ7\A?J=vO6H;"8spfE_%5>=v7q7nH.7#賎.g _GEjmc,rFfukPn:/}KJ#H AVh+&y])myE X4^g ]#!WI3?ӴW9N7|R+>M%+LRUKbH%+'Sө>C)R4gl,eަc*5RbL߮ \hR2mړ:T̃?c( NCyVpɈr , *uj'bP@\I 9:¸`UzN5L7ڳۋ߫IQDwyҬeU%RȺp~[Ma,)͚{>fv}+E -Kgn{{5ፊö vw{qodKZu+-m<.=m(6Vf֮8%Cn_ IE rԓtgp?Y)B֗-Ro/4~!eԛμ( .VQL 4>Y)c,9x5Q>*` +fR6Jj1L8_XUs,kJOKJ]ـPJ@&[ݡSjicԕ@(ٳc01ҰgJcR)|X"6Ikfػew}.26}SP^پB򅵹79Hr!aο#DžG] Gb " (X4eY*0eƈes%W*tL2o1D*j" ;) "9@ۺGD-ϵQ'H8*%GҴFp 1Ud7<Z}iW]ztfћqueg\-ImD}1.hַ0k͐wrY)%(iep:8鰲5lp6MWA/jm0%1!:q,c4K 0B.zsL7FuRdE1^’4K6p_w5%GNHQ9[2t!.<4ܷpj 1?NS^aME)$M 
̲̊BZ2([`.s9iln6gB%U@)Z/E0yU^t$LS9y9\*fum1%XV€r"QԲ衐Q4b -'fX8VsFZڨ6G6IkCJ٫v*to8aj{n,YXc2Fk^%R#JOZ_:16Gk/prO Ux_X]g}{7vH&4YmEd/O UB-͓T0mG\1 'YdXXP:o2JyX7 ً|*?zց)Cks8ihTQkF_(()Xkfٟ|D_P!z?lC $ӐY[x !o_,ޅtuFҡwe& dEp[+$L&LiR_ݒ@Zҽ/l+_)bfkmA(E^-|T#nL"T{\Zɚ.pm!2Vk.`5 `-ڟ)Z~&`* OLQD6pOٚD\֟FGQΐI^GFQ"Z`s6p% Sc4~EE2G 3.H #ٛ3 ƃj{h5>"ÌO` !`fQZi5|%nÌS[08=1'pf]PKq8 h#Zu*'+Ìu#S/% 5NhxKT&K*k5l4?&/èBX iBvΰo; / y#o 7V`ƤsE!qL+Pʃ_GC9-[PT iA}BX#zNwqIaI Ӻ8PPzZV E@;X!#1 O4Sg2 3橲3 ) 0 f32DDӎEc0d{mg_ZhC4QĈiP*xk)L'ǁɕ{N[ŧTBhVIB0֧Tc/u($<<<)}":P9yus׌G N/lzq[T)~=g*[~v]iuy?VEq ^( Zni~]3r~yEϿ;k>Esvyg]43Topmb>Eu>nWёoNt9H;IttrhD;Z{~::fb%GkM!KK*>ϭzOQ/aQa⵶}˩.N-Olzw$;It谷QtZ; *URxqmGl8%{ߒD7/~$ԌtLi9~}D]I~?VSbDt>'Pٹ1׳ƲjX_ivSF>V_E)[_?ek%A\ԅ<~؁\}'I)K A}4B;uCU:!=`X;i~P;|1}P0ROˍ-PC%LyݬeƤU^VU3n9\J=+KVI_낡:)cE{Axx Z:wJ+FNp:qHfL1|ޑlnhG*=+_\.;ap.T'lJiΔM)Y*nLB" rS7&_RxiV&(Q6EKmP~ҋx1G-}O)ǾbivK*\gviKuqeuUSQ QWDy 5u@aqY^Yu:N eY꺬1CI@2ƀR <@P2e ԥ.rC܉[=-Ju&I-J7 \+({~qHO \͵43$7IA00K gsj'D} s0]lRHvb;sWlDfN c+XaAtb,iܟx~wݴQ$I[0tݬLHi)l80-0JofvChͧST[,3nfD:Rf4MR0bR[ͳ"/3IghdUQ $+tGsU{xv;iM"13e'f6Wlfkɩ~[/3vVغ!& M?mAr4ZK"j鵍?ORCrVCwU+5P-X YLKYTP+d;yI ʢztjS[E¸a3|ӽqfr @H0DTU)3JRXGf H  l.XmuIH|qZn{'Q#˸oG_YLGw:ΉD : ݶw3BO ^Ё/$Cz@23K^R#|rq7|Q79W-mǗ?n=/#EǷ-ҁutּU9#7 :WEg- $ GUȹgno E9eI,[7qH@;$l':%ԃUjDNsjP\; zJS^hkC7A8(MH#tZĦ@%3Ty٧\9)#Ձ-Hj`1dž9D$Y`@JX@#+] UVgPM(3ಬj$EfO0(y E**QjU;9 V&3UHUY\eiU 2^ dV ^Uչ$Vmx&Kɘ =46 4j*\i7ؓ&]L0K- ;@\+A nԍ& #ؓaާ,p9wIn{Lc0'3̸z$?ƽ]/{iӕH.i3R?ַE0jv)Vi K8 &xes xZ7_.Cnuu}V.rȕ*Q k&gVXiFB,7sdz?ke._`ct)ĝ՞mq oRar^TR뽯. Gkj!T30olT)^9bțS:1{d inDHJ[sCBt5GJ)a^+ov 0buU(ME 3Zqɩjua¸qv+vp~5жx\=,[U}Ɵm7lgۍ? fP(I /\ՕZ,+d2'TjY2+&9J+ )uYK(zVL?V,iݵ5vzq]ٽSKPE֯ߝf'l(ҎH p v;:C]'Qg\I E!(rS'PhdE1+ʚ2UҲVyiBralʬR==77' 4/LK\K$.K%k (u^KӅcŸs(r6A +jw#C:T:F#X$(N3,3Ū6S7.SШ6RҪJg 7J8\#`-<<A#R~QY9L2"7A(O̵\p[ͼ>ixqoK)f5N K]sae{.\?x+*ڞ ƭrU.8ٸv1.zC YxhD*"{SgG~{̨V)|kr^0a 5! 
kKZ4u1sEGv=}X z$Vq8XjBQ8Z/r+_wo3]5m Y5ը|!iK}8֗NAx/ɒy7GT^&I f;\d4s}ma|PQ#*wlgADE|hѻ-&xtV+ϰ,Y&%}˭v#: Q#N Y'uO~C(2 :Qpp:b:׿y0_87)gPu}"ag@ )9%alWqTdžc"RSZ sAুM@A(!#8eab@PM}pZnm?FMx=ntǣ?@lԑ 9ظ]@4cqU=UI5Q6b"/X_Yg=T{xR5b^W[UU, f'kix+}o'9SlW*R\ 3C`\9 'fsD| #{C19WKQ7-]4 ]Ys9+ \pC͙؈qG{z$@"3B>nsʮZp!;pm,ǔM` H#% PƐ;Psj2ȫkC q *¥&-T " ϯ\rBD(]4|1]z2pLޭ?Kq.L1P>R3iQHU>wۡyir: >^Zo65,}Kzݺ6ηBooSir;{C>¶QAwlN߅ۉy/sh@0e=u Sߛ3xݦ[ &鞖Aq['RpE,E y-)[?R[vAVV1)"ޗN.0nV6[u!oe8mǹg[ID %Qw,c*hs[r[ yTdqJP[ owVR3瀢DN9VXjD06!UJO5k .+9۔c;ϸlhv N._}V 63oi\o>/oO7">?x'Y0ؑf۾"Y6z.0IAc,iOiVdm7+:Sl؛>4 `y&R(.,3d/>Exdnf J㰠t@D|Uh=??rJ᮰QTeJK(_\y 4nH!qh- i>,8WҰJ0(U4eȲ]AI60?KR=ߎLRrσ͊%!@pׂc j~Gոu. m^. ̇vvlPGh]7D !:#; '?Ԕ&! ODb  eF\uNðb:{%'cMg a")q"s' !*9C(\PJ@)(T˱DAJ!!4TqHmT k;v݀ݽt6SB%*uT0 Ak߆}P{MrĠkIJ3S(Q -{jjrҁrZԨ12N3N)q$hH( րm߁&85aVGZQ`B_QF0Rp \hx;f bn}ʄ:DXRA\(Z)[FfXbE{(:  NGT~$a-`=P)P !2.:,<$s3 H{Tzh$œBv'%h0_F,Yw\y~ʢ^W0('9mYFY Ҫ"g!B11 ˵[1*yoAĮvܤ!"aKtכ~`iQ6e֥-7g({|Kp8wڗZRwܲ]UՌx.K-z6#ڦԢnP\jbO-7R-F]Pjтh AHsEϷ&ѩEG-8djP%h!uڝE*zCM,RMga1)+Ϛڨ[$kjQ@L/hCZ\s盵)Csyޘ+rtc `cB:*^6mAu(yJ|)̄KKH-8@I( /!/g$ÿΖZ Mb`J I^0yXYdf*".gUԈ}aU0S hHgݐ` :eQĺ] nC/#[ y-)Vsߺq ߭bSE%g~TYe"[ y<@p" 8;d1O8G&.YBrMBJ$WۑzG =^_MbVй\lo( $*ClOGOJ-Tp'XoP)&q8m⤓Z ?|VA˶oa3[O~P12_o8^[! 
x,XP,SdͲ 0 UorSou4:ynғ\k}IۧߦhC<V$ :0$I>wA֚?Lp~,?\|xHeQ$R" @2W|[;Q|iSB߂?J<`Ti)qF[(PDI &Ict/UHʎz'53Wm)c)E*[SRwWRy6\Lz+5'&Xz,E x,E AjNKƱ`K`)q,Mw#u8X9oKOI=ܕBٱY00\XqIRTjXz,w0j V6J8KDZRSRwF_9KVua眴qK݈,$,%,=%pWjh7_3Kua {OJ8KPı5N%oKciUf)q,E2=b?-pWjXz,=lhKfOKfK4 W*pUkhQ?@1ai&J:( J0jBjPTo7Ha  ǂX@j*!-u 2UjkZ{ dd41N:zű31*I_Q1jq`Or LCB;{0+mBw<rsEm^_"(4쵭ڳHeB(cHHG~9BõknR oejoվ>1唈_nrF7;<ݞҭ+CjD|0ɾ*rOo*J蹜Ov`)5OK9^@PYd+wUKXI{$޻~3,D{#3X=|.]FcEjX ߥn?q7Vy (d7f~xy9QM $oCƻG^&P4(W6d ͣ |ݨV^Ќ}+# ݨ^ЇȄ(Hmis`B׍w}@ϛ0H^$89(B5A95MA(ꅝS/rܕ !V&h[Qt:v+AM=nkRx>DUArd8-e;R-~>OYѢ\ `#Vҭd5#aAǸMMRhZqqj1jHIXoB$UOɺcxOZb ښ瀈\=fg @쒙?GECH^o]z7nGL)& U~qt8>fKu|N2Dr(P+j%5;2'1\O1 UDh/G>0K+kZ`ٕ4t0R\m߀/A Dy;0FRt!a>z3Zk.Hx?)Oq's8ueÙ:=gUi S}ۇ3рp fq ߯iC(f GC{ X~3#?_lXBx{{›=cݽ?zA  nzc5_eY0$%=Om!7Yj'аCK!XB<|Xޯja>xhރ% B5QuuimV >Jj\c)L뀱\K8RcڹHWR XaVJKDZl`)R#D;^7Ki_JlLjO#Tj(:[z,w / AXzZb;^#Kc)BJ"TjLp7ǿnbR̃-J;^5K c)IK $gb_3e1+ng>ź×nr1I Tַۂ@9}s94\טbQO5+T UhD }-DH]?U8anנ|%a/gP2N_8q޷Y8^? H4Q 5|Ye{_78Px$ GEкyw+B㢝u*b b: ֽuR`ߍƱhprq[& ~60US 2Vؽ/r0&YXn CXM9@dΠjPӄ-PDA8݉=!:$8S12)iq3kZ^Lո6ku"mrq-X" Nráa2c! G$D9FwAFUDKrC,Rl%~]@┠Z+BD E FX 92 d %!Qñb`O[ 6L2*_ޒjL(0o.z΃HAz#@ 5t.B^Z1nEBb nyq&Pll Q,(P6̕. 3Q *@mwdwEz {nV77$"~Fɽ=p~@<76,y=z7 ?n[@\5.ʫfBbW:&,-zY?0_kP3DY&'Ǝn@ 3lWy 6J_6MP&LfշNw숹7LJϊ@),seje>9r\JnVO`43^nvf9 w笗Nޓr6v>G004VE`zs]zi@w~GO@KϹԀ!)n  rrQG j!HYe k2vh2x,oz<0|̲Ϳ7uN7d 0RLVC(sOwb}AQF=㘣V6saqFy.XK*u{!Q A9 B=p(GLCl\(@:#hݹڀ暣>Ī`L^ b_̅ڿ-;tPA^A겫bCҤ;u/s~̋c ɑ>O?޳0vaTK-Pts(`߰7Q?/7H7jira7RV|_n}+neg3Lc0x/ U?n{?zNQLrס;xnWZOS¼H1v?$K{n`ހgDWVtѬo?u즬Ƈh1vc{Rb3tQEgwδ[}POyC֘bdvS,ңvЩF.8-jSՏ>nM7>D/SJ|cv%n8Ѝb4_.A ׏7_tnu ^'۶ ܯ H=~<|>88%CQVEsb)27"/gmaZy5V{Es՜2-7`݇~VO .R°읒̴Y\A:(E|Z$.\c~grVʎӃQ]6&]KjNZ_A\ [0%_A1' c0 & U(#/:$O)2O@o~MG.L:GABRrM2s[IVWdžF.q)O 5d ='D|#r'Wj*(!7³ b"!Wl1gLb38(T)˕qhIc,a ~1FQb4Q #ut1"]C];\!9r$q>I(#-b- IRm9QX$!aDFqc9PT(fvEJxm (̘*2$kiheDs?kYEB=.."62PRm66&DD(XG*G$1"t"ZE #q\˒͑1Z ^lM6imU ^Ȼ&\)Ŋ ds)\q!}|(q)eD*4DQUH62:Rx^Ф,ؐ >@f b5#! 
8` Z校 â_98,p(Xs0+e@q- q3[FY%-iz; /.oR5xs"| Vz0}ua-(3 1L\NZ狻9FCHZ=2W,g'y){ JEuw)kVњ?x!CNj t>M>e|ɱ~Ax;޷ج_6f=U]S(o̳E؋- nS0oMK+kō;Pxب zPܸ=p%myÛ"aŁ 'S<%M s͐.G6wm,Sm,}seb4# ifYqcwN1AN,٠<ӄ̂|a.dHIQBO}sbO LՌ!68[beg–I9RJ1$R_GY=9m;I!Eg1~?ӓT.L DU`_.X ,‰pv?e9ӜIeΕgo}LUpS(u.:6:Zh%"/ЩAtgXj;VQ]oQE,Ha!*`>L*LK 1 y1jˣH'Rpf5H+Pl]f7 9Ȭ?P|]d>uwUpݻ:Sẛw SJP R'Hx1K̢Uѧ,Mtٗ ϊY'6 =Jh4w-L c2`cL( *+|[Ͼw`wEZfz׹ˏV}X>cR V4)pC6i[Їǒ݀'ܡ4i3~q+Ѫ1붶UZ9m{:)m)놵k2B,ik ׭hDiڏ}swHԂ4OvObnn6NEo^jZ^M RHlNՔiNƘ[kt2D0c-WNp5XXR,K37 S¨j. 0ث N\P3B*ic !826d4"jX\sYI < 8*lANĿ(ʕ-R5"̱[ǘR]>I[+x70s,AK*U؎޲R"14Ho.\ci8#{,$a lXndX\"v$m8]?~ at#]؞=7BcD1"ROu"ꚤK?Ny@Y1ڿOmyūfqn$Uz4_'G0;?2cGSϑk+ekKirTHY妲~~HyO {֞"t0#<ׁ? X:pMW0!/VHK^% VUݭ^rh1:1`%j!r+S!¬m;*h|~ 6ߠ >`9\YdOwbM/(+98WP_G,>9EeS 8tV H9b;1ū@=KR4FZK^c-PFZ[&&r.2cDb>F0"#赱=k5eCݳVn[h8|InY˥;,&PI/Z|eJR4f09(T)˕qhI <&R`B1 ( (:!$~G?ҽx<}$؊1[iVN%p c͗{SYUy+py;Dۦo^Тt>}1P'+L,2N`~3S 'K;wk\Y`%د+=WGߙ<{^7k(Ԅw JҔ0vH~vHJN҂sHߴPLilxįniBS-.21NC;3K^ʐHaYsh1s55UjЬPt$ Uj}Ɨ ,+,9ȿ{qڋ5agjW@: a4D9AA"TRI}p VCfBzUhjBw{O>ܻяXb='Ktfجojװ> (1Zf !NGk?`6|YT$ Tō|H»By @Ip)dM(rIaM 1@GF'T0 j[a2s5\PO.iJ๘ozsO. B<]cd`UAR )tdpύ1ⰽk{KDj"+bA6Q8E-춰 C\S] їBYHBOEQ-z$ N}E~6GO~h{{G^{=|4/+i7azdOwjTb"I頦칕׹vNY:cN+"_J6ٸ{rHd0)Lsvwm,yIzPܴ ]BdK-a:k18d1 ň2XGV1!ĝ&*Ueγܼgbg;_áSNf雪 Kt#[Wy!)'\\Ee"Vg-=DoB!kh!\E;lXԝS61aF#GРH(FCJd[jCEF~TcԓL`MlDdCB<%8昄pp+8Jʈ 2ڢ`#,*?P)5+d:A&ݯ7LVOKO5b $<$Y\>:NZ"V;Kz~ǸȱATxu5vHk8߿R"KߔiJQCnƇ+u#KI2cLWh&,V`pahDihtw;wXy{W9v Ţ*O-Qbj^fl<)JQSfNė5U4& Y$oGTk1'=勽:| qp^*-AbA={c '$Tr^ƌaJˡ4cF&4,bgpt +'Z?x›6f& 81) "+\eٽYb$IѬ"F~fg睍'qKΓ[;Ϣ LJA0\8 /Ӓ/}wkdO2I &|HQ,- " 醲FQC822@\łsHrqܦ0Ҟ+ߥFՎV `=Q1O Y)~ǂӴ!6Zce 8p"OE G6!Bs%Ȣ 0bc7YNmhq5b o'R18ķr^P弍ߤ)=(8VU6Gfۘ`6> _Ž è<.B`UBT ,k~}h\[>)Vt&(:z=]9zГOOF@B%o<ݾ3ҲE(%y5ޕ(i2"-zpR2ܒH63tnɨ)gh r$u!qxwvsqG$Xʙ]Z˥k)Fgju~n,i.  g=Bx==dWaDP}CWi.8t_qP'l-J<+vuͪL݌m2v&j3jJNmb(A\m [XѰd&Ubd>dblY!Rc$.T|$нQH? 
&뉟gz[O2Rv*v3_ g'@x†2[gb/ Ɲx2Bt]m%ko?wH :DJDoߧA#:)TMK}]h%QDQ]FUzªȭV"%MthhGj俶Ŧ٥9i~yIcgLJFnP=ąM0O=NhCZu#!D#Ip-!(UKd Ubӎwʛ$N_/'A+)i]n+քRjd(=?iabeZy4{d>ƟT(|UM@h_*\ʝ38%]EBI\K{[K#O><e:bLe K1'- VD8LrqLedc"a"B&%Q]\DNpgp7:2JTvvb>h, ' |L(#(8@:PuQQ0{t삮0}HJ|%鼘fVpq"M[/xS` t<~!<`spey#>ҁ?%A"/( /R<#I\L¸G0bIXCIwһ#^m8Aa`1:XD%ՈI#uLx Iȃ0 #qe!J|/4VSxa.4'bp h!BA %H@:V( ])B5~2kbDO"׮4d9aMEq~~ lb~t l\Xi9 l2܇ĉ΋6fIh-p2i_Nβ_5 sd h8t#@}e"kF0d ߝU@DV)d DQ5\sMV゚3~~g~?3>O+[fzv\{ uI -F yy;8+/|6=&*zOc0sXa0VQ4b66CP DDĵ$hcIz({SZHRe\, bO`v1Ҟ|'2U:Lh``! % Fi1U"Fn]tYoo)_DnK6Q޺3d/_?ꒈ#1NX4xE^78#Y(Ú'C^Ll+݌7{0%&4ɮ2LyZ & (SUp5<`Ph3L7PMRMsϕ{jǮM٪k js-qMu!sថM}GdY!;N]ÿpc)|G9.fif&l:M/nM~,mi |ŖY#qf&նS8ȚVZKVX!]hI+bhC|c,縧j##F 3 XbІ2cYIaQLAEqEQ9ؤ޾GYNW^+t|x`^mEAwԿlf}썺X^E 3p6۟:K_*o luDٺ%f2 5?Aq$wI\aPsjgn Tj j!f~t0sBWJY||"/;geǽ*;|u-q(L/_fɹ[v2z9*(cߚvsqG@nrZe<>W7 H+xc h9IKB{߲? =sKu+aa+墚e9zUT3 KQ]9@w?J+;ҿѕ׷~yMmnW*QC}p3Iz L"r(nJ.dCrH >7Z/nn}鮗 vtڧnNm+;-2IB~p$SecjNwn-MRٚvꐐ\D+Ta.c]FNl2͏^5pyYmo3Kުsc2R\/Yi9_(wyHo=>gq! S"wi3Ɲyx8՟owwÑvYd ǹClY }Y 1*8 p*RHǁTFDl#Fto5E^>kI |Y'['/2 O 'R NH^+y8?4\p`yUdk%D$EI(6:;$LA&:$k\2 )R }b "- bpy/~g=:?e8t;?VgOzbp*/88ٻmdW~w*C0],${b5T;C qDgҸ j G0K-K Xgu%̎,=4rR}Tbd,XguaXz,R0ժ,j16K%ıTB8#>gYj5?YR0f>[_,=`M0k39Vխ~=8KAıšNc: ciitdaT8r J΂BDZ<2rė刯Ke_ZI㑥REr%0곺 giժ>K9ƱZƫg)PKʱs&,cii5EG4KDYS!%f; iLY2`υ--"\0-fV)Z!O[q;Vs?W=[qTofM#ı8R˔{ȠkebKq;zOeө4*j;PW6Un1/.r3|Lt'-[o;X@e kΩ*E[>c C#͎QE/˴Da=qݽ*iް^yc|hlaʋ{*h'~vcTjQ9a2߅fǀ얃@p1rPd[GaSmA u(,#q|b$Bh,3"%mLަ6&%tmEFžP @ֺS1Lh3jf\hBpzsF$}d *rR2ꃡfzNsŷqPQ x.;@yDӋ?H._r-w53U* {zL Td"9/4xWJ8P޽fuj"Myq7spn+X@`q!3k`xɜB{D1w3Aęc'ݭJw(ҼPNd)(54 BZeRY2VHyBER6֛d[vT6x R&Gb7>E7h-AsKp9s=2+Zq(e}0-F2E|{a/oYgޭfPRܿH@ ! gp'JwYq?}_ߝ:ezLnnk|+彛L$Qz_IEz:@mIq gߗ:Y\ҿ+4}\ 'XB N%ƗjOFxi"#,*2roiNJAW1reD媊uhG^4S{yƨzI"4[PVN2lWξ>rbh0#*aRҼso˕}ō+;9EBN Jx,j=$T-!ϗ6}XteF͸A"C/h5 ډ*٧B `Ŀ'߻oɁBG⛻Y}w._91V=dgW?:Y|r-->bҋ;d/o)#ă +F~`j<\n&?>=^ N!V6?=:x Jk$P{z7QHzfSl%z)^7`MD|=GuXTSUO:UAb6t$? 
Ғ܇x 5KV{rE( {%OHxKӕۊ7St q3bc-n|{o~XBn5J?BխF >5/PkY./uh->K_ǽuĽ7W7!\-q?׸ eH+҇־x^LJC.3V /WoO6~>XنʗJ!~L[ٝ]9rMf5^j>^W1SH~~݈ 96m+TѢ[ݺmrT#fOV [_ b3D)gFnEF!)ZT.D+ɍO?*>55 tza5v"Zn׻N?;ykz<{7a=!^fK* rkK;>[ (7#`KV=Z&aTF]-pNҨSs%ڨFy)$h(7( ! !dJC2C!YH7seD ?yʏN/QOaYVke}tVuأ^9Iy&-:T$*V//(}2  jQ YyJc&yzU!mV=Pʵ|6ڤ$|H@?mhT|1RXwֽ|kjhFQºC/}p$7O{_uQR"Rj9$F` /Y+ nGEKǖ-ـ9$,(l7y¥6 ɑ4IFDmGKO6T))b*)doMfٸ R}`-Q3Љ:Kh;%A.I h5kl-pʌ! [zsEFD WN>J~~|v1 ܾPgn^ާB'Ydt!}8%gq>n(,rPgn"`Ckdѭ66C^9E;sJ+ [u~N$2 "&D2+)SDKd"vϩ}=Reꭽ #;U.>Յ C_q}:w *aVFYG ܴb_/6(I{7W݆_8ͮzTjޕDzyMޝ0R^}^~n.(=s2k6}!D/HPPW;[F-bsyc*7>*U͚ω_UJ5r\l\,})N҈n0[ kQcD |8[L1qI6N<=/>WB[sʸPfRsq2Li i_V;{#޽S8a4nۖ˯-6.Ƹhg wQb9d)B0;Wj^2FmxRPF5)% ^a|L=+R;h$]FA&zb½う9AZ):ՙP$43 D 02̔,Ͷm!0v`Ao4聡R3ORY&SGNL+F(Ŭu8RogD*xa-0uyU$*Br9Yod[! uVdRIN *+x!],gsd=B:t$:ȗ6 |O, iDT93tܨ"Be}`]^ n0p[8neRm9ZE$d-FVƀ`Qu4Q+~LB>  l!@n;z='sR⸱'zcO;rkqc{dOghP3ވ1U\oɻ!e݂ȩSo"E{Z{hTCmAsx.>"}0i#g;.g("s $Ga$&Cna݄|P^CThV=x'֣OunOFW["52NNP+鴯TmjaZJc1yВacfTwH* uEj*vѽl *<*I+Y뫊9ZFڵF 5 5+8uS;-,8J1gnSko!?pU3!ľ}3%(c,%e v?Cs5zP,g ?ATx>E܀5XL[ i3T1S_sLYΌWt{Z(k%ȁ,^5(/@%G(CʊxRg WinciQ\EZVػFndWK˼_ $`qvOd`fc-Ggv?nɦZV_${ b+EGXDK%x*}Sξu#>c$+䉊V-D BB0=$ K vڨƠ̶Ã҉EQu{#. Pùs>i!ꯆ_GdMn:笠GRCLW,WrcrZXUw<9RctD =˜7/U0$uC$0;i$p6bX|T՚b#>5ׅ!:+սTHb?0g}-FhdTl{Bt(35b^Kz5ݔ{LMA@CVfB; 6p . ,0Q E ){XMhFP4P˩:{BMsܻHksL\JKZl͚C82،L9Z U`XRie=4UDqی@7`FY7#CEX %+s8SEt"q<[-rUG02d#x涰m'-Vv%(B?JRh! 
N9%5!QA1-'*x77ړc \4% 5hO/@P-=Gޭp:Z$Iaa.4z&1 ny!8z$;zI 1'j, G=ӃR`#u9EOUQ40#> ̋?PϴwSπ6cFXs(%{I@X(!JK>5UQy| p?E*Om]LYtO%3"mZ'=<%DCj8JÝv2%D(nF'4K:~ɹIxSpvhoJ| uۇT@[vlC߀ީkC5{j }^W•O,Xmɟ4\ RC4zK^ q/"9f4##͵C' T,w#J@g$Qd(c$mSY(P sDZI\cˌ[C4Β]MUy;UkmD1 9HOw5]pe^aOÛOmTNe C_PïNPT`w!a.#\{jYD6/YDJWB9-"J;حma[KUOvdb8֒gd֒aލNXK!*1ڗ95)m 7ޜePO刈P 6K#* I3Fe6;K2 on×0N뤕mk#u,"̻+}h.Cgh!4JwWq!$zSEy B)tƘ֟(=8ѧR4~QΒgrBE$~([{X)` VMt *3l 8SVLV{#ZQ5egtɢ+d.f Ұjrh/{.on翥/9SapJQj8Zj/d*aHfhz0u<1Y6OYw?R;_q:W?Ks=ܸ;]33N%@N P|>~m' i TZ'^FIt2`U]J̣.oœoKS'S1imo.7ӲhˍइwmR'PFzi)iEB{Nf:t1}/HD0DÃ/ 1VJF:<|lzzpHwcS%H ٲ]鮎?Q)!\>|pV/!"$*M1FWo$F'tdx|oxBG͗49PzX!ۧr0jNT ٗ OSM^嗨sv;OPZk$7Tâ7mOq4Cv>lغ ~K̺o A3&u8 ]}kн=iY R(i@wCҊ2mSrsR h<";#윰8,\H=9{#Q0~Hfp΁Xs9Bm˾w2x&{wKAwߐ:]/!j}-&(":( Ki*cHIʶ(T s ( .i-ѝ IQw-)../ dݣfZub  Sg)co)¯z2gr1y5(=szap@h;ޥs/oxOjHHk< ߑw߮> Z+M+jofwЪ{gpFbog'%۾S"B5,=35֘1hbt>_|^)=ߛ"z}{ѧpulgV`u{[7 kyvX~YMZH7n}VTB!^y9y͈*cLS2 UbCqQ!}f' b2y4ł :>yt!98]}0ZIW<h"tR޲(]uWoYbo"&)]OaR<: =W|h07FfqJuD81ssl9NS"c"M(Q +5oRe6Rpaa_QD# \ ajD +ц}0'c-(тiNHNsiVl3R#X!CZzYHrZ0B %D%1B=m?unKN0ׄ ӷ`o4x;Sc-IX*DbL`hOnQF2~i+lJF3N–GD޲Ѝp쓘'FIz .BR7Cx& g.s% IBMu g\@V5SkԄ./?^WA,$R JIS<꺆r&'2c!t@э?XtKT!a2<1: (XϻB8 C=On{1aD}âsG)ג! c>%Vx_x. 
1P!X ˊzYdvL(e #L iĶP!C󭓍O6U'=A 䌬R?.<ċJ/B$!H3Jo\K)V~Ugb6d0D<[u@0!B G)a[1TN:O H+/iq{'ݥ3v7Udb$V.fZo쩻UXYm/fat6X*-Ak㨄u,^zant@BGXS DnVݯNQHmWrh~fr2Xiv%&x6ޗ p:~|ew˗=iz!x{R_:]Ǜ!)"67%%Ok~A{Cxq>r>X:I O84"5 :gZ5Da'RRVL8"(GȃiA5VSwum<l}|ڷwbؽ39NnS`RN:h2c{RZhIW<Y hwbf=oV4k|VgY['Ize/A͸)v;y šy[kH+uV(Vu*Z%aZ mdʹ׎S(*KᶔٽaEXo<_%t@kC-8 k⟪:rF]oºVB4F.+?ʫ\0`;範2L*]&VBRQ$u%` Q)%id ]&$U+hP%З?hoqkPi+MÚo)J_bHBqG[n>J:@5R$"yr"GKJnqc h\0 ?Jv9Ћw'EOS"dVʝ&\5DsdQ=Ph)VMnC ]7b~)4iS:@]~&7}أ{~^v.gT~a}Tz|6~2|!n!iIŬ-9Ü=CO4(.FYX-t"|!鋴\vďx \n 9>1MZqoR*C΍ 9~R^10C_AeM(2Lv@IWGAƛhW-hd BK9` }3-gB7J,e>@`*J_(X\yKuN*iNPI:@/W+N%=}#KxlNew4Jh(ySNj& #7Cfm{gXYe(?|߷R2Rof^7h@lZr>$_a/v6c`R&md-^8,$(P` ;')OC0N7׋X5gL;ԗ@FAȒ$@ռCiڠY/7/ua9z$1dLLjEKFMx$ 1hkaH`SߘL+/@D20-)rZWScH1/UZ 7#7/6.ZkRs)F%Bv,Ù&LH.&2)OY8/뿋yi,y]0IR<24V C){>BY;A^J'=v}/kZuY$p )dm~#MUconwY]u=IdwL,Fm Z P| 4#~$`hXSNAv޳$ZyRٖzpI_E?xvh>^\oM\_KJljx Kt &<Aݎ+xb^,xM#I_榞k*Cbaf!(NJu,0㧡hAj";񀴻7׏={W C,D[ȏ ZТgG0 -}7btqY"t"_Pj{nKL0`o"^z;"XLNH[K]TG-u̡ _c(G »hIfȬޮ0 EyCp;2GFEC1DmōayɼA$D2\=,O<4~"4Č܃]M(%eVI̯^Y ?@G 5%ޏ1+Zٙ0N1B @&ri9mQQjQX_qm,k݀Lkq 8$q! ,jI>x Kx z9=IXկ||kiv9eC&7x)ӆqhgUm4>xՙRv*3SHC+gE!R''G<u L2Ky"7Dt.KǢC@靛?}j(.Z+wwF/?1+FQ*T:_Y#53G(RM㓧"Oqt=\j=M}_AWHmEPG*/ZxA"Y̔{|<m,T8+҃X"E i1 Ãu|]dm1q=B 1%EQceєi sȱ(V]6jlwshX2[=*XOVR{ <2zX 4vl͚gߖ_\\u1;R*y3p93,D !\ם:Bb@XEe8Ǵz= hhJX1X?Ĭ夐}]&3N 8hQ:>-`)奛ʿ l|\ EӁ5"rB^RZ_U};,!uS0ϫF*өބa/V;:p;A`Zs#E٨ct2}w,;0Yg!Cx(evMZmr={32F(03Bf! 
΂˜ ?{OƑ_!iMS}j M$ΗѧL"eW5ESCqhI dOGWu5,l8]cj[ a|WD_G4f}PDRsBXzf͵-(a`m^A V#瞶ŀm[́aBPLTC_K'!+pֲjvѬT ,GPXK)u ubeor3~g7`I@ zEos4{1QQ]HY.ƕlQ|b^& d+x:O*2Wyk'"<E8k2\Bzɾy2ad+RxQ Ϛ j:q+%qRM e?/(Vj2rsֱ&HL{%7 v_/n w%:GM,,'㖓ɉ5pފ`趰Z?/FnD4!% W {qH8<2N ,N賆.}9Qr(%)@,B ~e4 *ԭ!o{ 2>9g|8ǥ;10KsFyG$.nB_i,(%rcOSIceVҊ3"Rm/vol=X]]}:z)F-FQq@R6跅2+?}yfoqvHt2p?IH(Q7h Ϧ5toPԍd)cA+[~ 0cR@j }c{3{8{ wKbxfxQD4xu>y-!|Q 81vgW4WmK*Ds#zxJܨch w{7ςc&d pdfHH8#> c a`ooH ^ _w]+߫qkmǹ8 _s;•ci_l aޖp}3rI*YH^13i#}Ɂ61!2pYZؑ8`[痛M>,IH3ٹ)e xIDI-b0rΣ%붤*܌!؄ blf\ȒtD iCi ʙ\`!ˌdjo`6`!II^x,Wq!fLo=osQ$}/xwd|dgmq^ip.Љƒb vރӊY(3DQoJIQ~9KڄeWr>0xIJ8S|wf>i݈~ rRp%I%Q\4zn\)$@dv֤2ԧ}ࣟw]vx˳!D=YmIU8apy3!UOYe'P3-KʳnV uTo/RLrqAPQ_\^qqΉ5($(DD^[̶h 1 b-'LU(Ө}ấvF_3^Z~mZ/k1zJZݱ7) _5 ՛)Oo V"!֎ RH*,!3Zju;@ B͋xpHų5n=5`Q2AemRtx)4h ڇAMeAl(R/At5N +Bh~shVn.-viat5Ϛ}тKPJu@ʩZ)㥽ש_ j?|8 $ Lt藔=sZo%[!lut~b'Z1-.1ZGb{KL7q{x`L(dv_XlWԝލFgD[SIaPޟT̙wDBpK)-kհǔ`iJ O+Ap||ЙfI_+-M[86 .Dy:wsHOV Nf)ƋYoA V2`Ri~ 3qTk8 :{$d11_9`~3ȁQZޙvcW(.@} 5c YC)e*n|^^MFL.R:X<Þ3%Ox M+Mlt~ۡ cl迃@iRO20~R~ hS \uCp՝~ģO8! 
24V+) jTKqu;R*"l9-f^XJp8ꏗ U$HH o[VF%,-Of<6[b6=Gs\BP ~չas֢&[柷?J)s1׊, gk7%G*/V%'?W@Ȇ.i<h[ۯN]ksiWoq'W;̱|S!yLr%N1Hy1G) m|lk`ʭkx;<?ʇWw!1zod'HZԽ5rKſ;(#p}%%~Mx/r3e 6s6wOxw_Ͽ ?ӟ~0O-Nksۊz+afAٔ獶`cFp*1Hu_;ȝI}[6$cX&KJD;ebconWlS\ ׾(諟" w[;P^}%MP܁)R|%HJR܉+^I`}T}>[>V'CgkUf~hH [noF2qko'G2;7?j!4cOބ܁ڪ;C٤Sƣ΢Ds^}s|:rIh6HO1x\J^V~:,CC(OGMO- O b Y/S*2\Yt]36NvUa ww{V3ד x}^u axx!6sETt8FG@TY۹nmD7͓G FK=@\Iξ$S$IBo3EаL}l7ǂ`ƋkV[nNFKz0|;؄+ gyrs+.aJql)ĕZr+xn\Iώ+I`Hŕ8 7Pɳ3p [:[c Xh Amr s'\C rh>; xVc sO+v%UlP6Y57㜃wd28V ^ r|D> 7%[ItnoeC60)9- T P%sȨV\ې|,4Bg35얶,i"١v فɂA.> Q}H5ㄿ.0eZ*L3 "ymv6C]t*VhRnBGWEb6)4 N![1I4WF2 ` &oX_:\+R G=R ۼ\ qL2{Ոhz@?sQ[,z dre!Rd`H ;xiO5CA.1RVBy[9TӔ.50'@)т;D 0b#b(?$¼Dm3ʭy ׳ALPEIC% y&D9܋XR~eaKIrMkpx%;+%Ӛ[E7/[N8/H"])K`gIdR*m+B H\Hѩ8ܱP-FbJ_t NZmqI3ٱr~ᚿrm r$AVR  s4s"Z#WF+[98eOj˽5N;0- 0cH\e4s'@ Wm .rܚގp;dž%Kapöp[dϴ+$=n˙e͢8Tf,rf%aj>:pwy`c>@% ViF(v_kఙH0 a c^ Ɇ!WMZ{[>E.U|'чWxէO1oIQ|dG;9-]O"W-=j-PtrrZJLK'T3.[Kˎ$+ UW7_G}=g7tK% U"3oQQV-{> 0۠#eڮ2^>vD )zr9joL,Kܾr߰PjuN ,,ID͸Hj uv(.YQC⯇_RAǽD*B0}0kZ8vjZdc4!1Lw;i5BڬnxB.mk/J|76븝wrX%BFȊL1eWU[I v'&?_Yy"xs;amxHL+1D N %OKgڨ\\(NOɝ]Kzi4-fpZ\T0S tHEMb)1m$ y,& 92Nё9S.dmUpw~{=90ŕT!se/&yx{緷__,m;Tljӿ8Gxwqz ?2:5z`O$t&q椤!xeKZ1(K5LZIڞK=jAFFW<|'w~5G\I|1G;~bZxUl|f;&"m{h In~HXmK QPכԇ5%wT脃t N]_qB8p?bчkGLZ|}@)$];ce@IBfC+V%ƌs3@sij40,QG/=5` _[삔A(HV=@GNT$Ek4!RZ8OR_jH 3OP, DhA(JPD_NzߝLɬ仓էh/Ft6aR7bam@>2k&;6ʹk>uyFŽ2[BBپ&^{at;SFeR9rg[wg ]s>1m Vk7a|BΞ}u8wlScj;wԵGaURRu:Bϯ 3_Ͳ7`$uXʧ.ruy,Vh/贁\ACVQ5h$0Z5}~||ͷ#D$~pΟnqd̐-tS\.K>^ίo~~cr2miuaWJn;r2IN%*l-\nR\$`#R* Ǝ=ȑ8gwt}ջ#zu9dN~k5 {6 nN† O)Bd{kAXn:#P[i6v`sћQzzssEx)*nr 1Sqh;jCx~7Fsf6I~Mzxis) {9j8\/u>*y›ٻdޒ"|v>T.RMU.RQUTMNxSK.Z9Gvr|ER*9Dd.:ʂrJIn"(GĢsOaKJH6 ^$T"]b))D'I;A h58F%NHd0! 
TZ6)‚dՂWSj|;a>ݽʍowŲ)`tC{Җy!ռ/&]wj]%~3cI!QHiQЫNlS,~C -t*aGuJǣq9YV^pZ*k Mbv5Ob"Wp!BR /N-"[ťKS2:Y*m$QcdMW+K;,MU97ba+m >T6 @9ذwkQlN53r̜P-(00z[\sN4d^Eny`OyCW8EYV3Zݹy>Szo{ix}ٛͭ m2[0=]_ףuz|Pi)G[k rيi?~&}ǔpo'SOO4Qڊu^{lN橧ŷdy4eՉ yfX2ŕ8rz# y*FdR^cFHr֭-%MeT1ߙukBVhݺАFֺt,-XhRc#^xf8_H܏ת(tC?]Z\YG"Nm^7M+`n8-%Pj,+sw7{ iuT}39옅"LHk,Z/=e&Ee]ӦBQdB][|nEkp7au$ʼnaYQ<ؖRw/+Gэ)k 1JDP#iĤR$k$ ,־BrH8JXhΛ΍hYɤHD+ĥ/>8/qȠ ;:>Fpe 6.^#MQPiW^&%09O4b!&鲐'K!ZmjzϨcYggDA@2( EiJ`- .BXaUBtӞm,%o 2Q1g x܁e0))% t9P@^}ֹcҤέќS#lğ=HGsA/!V uF*TUDAF|#-)*6[3b46Q Fcj 1d}F BwZ4^Ffr閊?.z֎^)[h4*J D¤T>*n&k ;p,3LepbZ000ڸ $L!5hBe!$oV= fHAvΔlT] v [,aԮ1'0𺐔heke/ki3EEY}QSޕ" y .*ybBˠ6o ̀+Sg.QOfB(vSgskR\(jzΥN E53j bf8Gbb1xr{'M@-&yzU7>&E1|s#6T#J!y7?0Ҳ/,VB5l&\:B,I$=L}0Lꨘ6\k"hT 뻋,ZPV>UM 5>75 T٦o6]Bwi`ЩdA;F'$7D Y^Mlwmm,9y`%Յu3y3 Q,ǒs[_VK[dW[g uWGXdM,^>ke`鎃竩L1"*Yآn|E{ [uEV4h)).fݝ6KfUW72϶ t/V1R$.]սDq#?~J2^m_Vvj|(aIyn>wVx?(5_8C ޑד{Zϖ=șw`MV\*&ߑȸ7?J^4]Ʈ#%]' 1-J>vfJk9SCxF,y_Wֹ Y2$rj|Fc~b7y9VvP@UNIǝ".gG8TS{[0U/ e.Qax+8%5SNն/-G L׾}bܙ-pvv@)WʅYDlUb}ʑ5jIc+F9_OeȷoS,'E,'E,'E,'m\s5Q 5MR=TvNx,`*2ڔS ""dJAX9xRC4^8uЇQ RՐu9*R1UrT!\9R]Qzi߳3ԔZhzjj%PV't3(@)ӪaRHZ hN >į?vmNNeKx \k}/u50yx yRI٣66yeThަL.s`Ȑ7ԞTJh3^',ŎuI0 >zmKtő$9.44*#0ZwSD5>IQ &h&.i{.tԌ /FgBB T~RY2k+w2h)K& Ɣ@RTVAkDr`TD}N?+SjVJ6r}!./LZJî^25UXK@ kRgkI ^fYmxVBޥH :uİ\WmMVxQK?u;M.Gsnse &1рF`Yݍ>$RGw+/`ԱK½f5_oU*Q3̜Q >p'pi^mG{S"Q̜&4\T&w͕QiFeO (J.䅄(_`Wݷ>ta3QHrPtQj1 Zŧ wbԞ8*It u.)7.*]^]}r?]8ZF'UGwStc=r@ȱx!ǰt1\۝(0h⎮qNT/ 9wHCg"hEWYوxtC~k`3'IE#>+Ix .9 ZV{C3]BU8x Gu`oynpa." }sEw{ߴ3\+ ׆VYS>7}c4=f2J`5/! ^\`&eMXe \(UN2)Z&RhD.' 
h(kلń.um˾UwWJ9 jLkˮQXo1\mO\ "KҠR:2앒%'Rrp, 姉e4v,nŲ{S_=g~2J>$T*/Mpt%!~-딙V; ܗ2( ,!MgO&:A+%,z!H(=ׁøuG@wnF*(/|9("RʳCDwR8vE3Z8K}p4 &"bЉ*FQ8-51T{J˵wc{9xoЍR8*"1I1h*9qQ$I0jB<>mG\C8[|&/ =&qbe0%u~(9r&Ĉ כ02Yn0`87$G՚?&T_q7yi9+w/4 40s7qr%IN).HchЀ,IJGWz' 5U^*ăW=@l -n0ΠUb!à3ROx&B s$2d1 MZZHbxsD $}4Oiy7 rqSDDd1#Ybξ,cflLUǂ×kೡ-$l&90~T&jHqЧ?7~yk3X]L?LwS_Tܡ".EKDTi]ϛC z\; nar{dħviu>%Se#)G U: ̿ل nM-< DƛnC(4=f3ZQ}X˹pk܅z@M{I P3Uɏ޻-h673f[h+FfDDm (hTyKK ;Z)u!zPGNhq?ބU q-5D@@\hZ/PEc_V2(]܆ +mn< QehƵcGe F t$' DfHktP # =9~ptW &KV5z@TtYΓll'Ԓ[ Ǻvq*22hk%8EIZ>LtQdKb޸:ѬL/;t 4a n2z~3Zm~p2n1˝{KpmM`O@Q`|pWW\e׼raE&"CJkb/eu'$9 ʣ-sjM)eZ]\;G)Xi:@Ѵ и|dxVܥjF{؅v*U$S}kEJEdtlĨOZ p^rnߛ:I݁ !I$5LY0Q8q,dAw 8]Vy$ȌѴgsY2uzb|+!RW[Mqͥxw *GTRҾ]B, I+e \`c(1:%b3^ȆeOӰS6i6%wچ.*ZNpʹ-'}"<9+}p'w=V_MZ[RY3`d\Xd jWHIEy|~N.&ӛIӊKvn|rB$t) JEh9$d NyTWS0ճQߔ.@w!JGQRqħ LZRk O ".p8o 2(S ӆ{!@X-}nf83g@@ܻ =1 ,0C/;K{k)$KRH=q)y&ЅPZ:wGK#U'7.`ƥcFw)VNH;"h#>Ʈ'3mq!ehAۣv5n€>dS_%VXup]-Sj)xx+  KQ:v(G%VF=r;h*&!]*7 }{s-8:n(?GIp$d%q1c}ΔA6/,dS<6}">2JNspOd]cTVeun^NH6z(RgHrsv{^N!gtɏJȖ({Vk7úWK9yRKADs hD"&$a-N'ےNMsP $2ȍ"姆6<7YKɚ0Wk.eFѤQTϢS0 ꆭ+Zkz!+HoߝIZF=D=2hg[ 7w/A].rxm.S$1RYV <"E5X:Fg^C|t6)h0¨Q%LH-bLq+uz9w|SZz=yQr~efKv3BwIP{APڠꛛ[]ah r?rTC$1q2 Q ~iS3;e/nC %אS𻱫UYa`y4,oB&WQahH`Lhi_A{ϨOsjB8௯yWL7~lPbܛ;>M,BkLXp}Q00؆uMC%L6TYc ̤!.'FFȄpёHenH2q|K{+PCij_z}+1st,h^VwҲ.p0׻M>,Qf+9I6x>GUɄ9rR}I{8_R݅zM&O3bX֓@e:sQGtKSޯM_ u"XITTvfMijLu$C xDۣWlmGosRI6/*K*qA, y4S}P#ؿ.#ja,0Xںa"NP" _'{#@i{0ܕn}¨Op#2 r(?ڶ<|92)#S{(r \ +Cmy&@Nbz.4=M)yd)ҿ b4sWr2Y8ai7&)0x# Tw^P-㴸Syɑo?N¯lS%'7xr5Cud7q)W2UK+q'k^i=xy4mۆ(T{R$IGdz䯯Us YV{Mt5ชl(sɿ{p_o7Sw8?0@7xZ!nb^njk! 
_\+|g5fTwVbFBt;o/PC^(b!RDy72dF) ~`~1 CYL-򱒲w'NϋO !~9h"\ܕ,_\$bG6r?A["i9OݎPbzA`s3 x%E#ƳLMHthEgnfZFȍg'p:P0+iИV?uZsAYT72q8!SR90h6 LbI3g୳L{F z< l&TY&M9jC35*"3(,kJ5*`{"W|ۣ-{Hߝ  l3ի|zU#q_GhVPf//&zAaNYXLN~?,FkfW'ѕA;<.,sxAr689#$AY夫nRR|D]PڅJo2زl=ߦˮDy$ה'q&?-XA|$P,`E)oyt0w觷 k|jOOwοq7t86fS~g`FLh[몼m:Y8w׷)\OrPblt|u˹_wޝ4ǰ'WP t4z9&T&cx3ZOYHǻy__5^g'b$1ߨ=8[oޜkB䢎1/PRlǢGaYHg]?@qF1E+瀼TIW`8fEY)(\5O1E}6_HH{IOvh>Gg>A=1, DZwqpD÷jF;c 4e#AK qkh[bb)o](:"^_vY)Mw[?Bd1rխjx5 h_6φ - R9ڻqK $rgY lւGԧi)F2s`۪%U>n{"N%]tKӲ 쏡 x|ZEɨ_ .AJb`\ R6v #0eú&¼UcyXNShow&kZzo%,?8޵5q+P_\}TlRs6UM]BRv߷AR҈$fJLRbu7@_$PEpTn~BOH@"$<{Wo@$Uw߁}Ã>l2-9<&wVJr,wi3 .1Gn@O2̞`:5L>=\.m/s2ǫJa8% 3r6Oc.qPSpN]"~NRvoka`Q g=yܤϴ𼁰$!3P[G O::!n0C>Nf7/H<OUDf \k9[5(J_aWIV0XfEq=86g' 獵\Đ ;Y'935_Z%29NOGLj]7#b.mǓ?4TSB)Ii׉|^o,BV\>875gxEew <չ-lwD)>.Qˤ!Qr|QPs oz9jȀdJ[:(<`"nq@ x`kgZQʻ,V\ FRSDG;㕯6EɵNPvg~TCf.,iQF3L@2PhzTAaZA_J^)`*- RI yW."q~P 8>IFǫ4F,1a&"4B 1;RNHΡ 5]E4I8A65xͼp'ʺ-gM$?4owBy"¢EYSFE\J BP-'Kt{/:}0Ar .@SUA8-I$ĽaP9]-}"S-%Ç@1ʼn]>_{3eN` =IsCи,EՏM٦~Nl.%}t%I'6bJ>a0eF[%F%\cVIi^E4qް&S*0 "W~Zu \1&m`dV#x^B`Nr.zu+-"3.ʫ^-.w/]΅AEp-?XP+Gc^̝bC -iPր9*֕@- Zi`՜̻4<ߦwNti̼g# cA@]IMLVz[>cɷ}CN:/D<\Sbe{e1ע,zrEh)ڜ?6M&蓞fH,R;|B|>6RXT@`<Ѧ$c5uo)1B( C%QF$ls oƸ4MS]\w>1zNP-jjSK'[I3 ?).xzl<r1+V*i{~Vg(SB_:}N~gāO5ǽȓ8^,C8/A? dP*P1~y28Z4!RU;Ji~r'k=wHiהRj0J틪r\5RĈ=1Xk!|⢼*. {)(ЏWk!W5@(4oaN@Pj' %m@ S'`Q M942mTSx\3*d?൥ą_oUG%J^%BRGAEFT&rpp(Q&,M*Z}|y"גNk3Ep8ob:JdxH;b"6LC̕ӫha߁S:#S"RB\eAUې|,TB$3~@[~)$dwǮ-aS8ZQiH˅ uR[EP )_>rIxd#l|Pp녂砈*OTL?꬙p 0+V΍]t GJ AI[k NAR$$!v. &)čМ!FguU7*ZCI"ڹwB(0sB\]ƅ:b(HsvXCT @#bY)-q29 [p)`-OqL&dD|fKzdQĥȁw`աe14Xᶛ/lm C0{!xL!6duģ-'BX}YNf8U Y:΍Qiw^0}/xnSLk˔ֽǻ,dtmZȲf\S-,@cûtrkmrH`. 
,.H9,s!E'K[/QܰrwHwSD Tے,Z٩, _ w9e zdYMXV]42y$%|56ԑdC+йGZIXPZibK>B_F!1aThDS#q.y&dR a&Ⱦ}TM%D _ X4& tyӳUk2H#ƋR1Črӽ 󗑖s[>A$aB"Kr,3Jrɪ)H޾GPDU@0z!E{ZL.s2VYΚ3Xa^ɐHi~jn#&jpJ1a5*f>90ψ RRql?k- \[dmֺjxX[Dr!du֔s)5Rj`k-ubk8c5Ԉʦ>5[,*&7!4lt{ RGq]`݈ VamP+zervv[oOUhW(M} X[REˈtYN{S|3*KؤKm2[7g)-emglw4LMX;QJ.LG=V:i .p{PX8!Rv*"D:mw=h:JJq_*ZTBz&}s)zk66ڵ&Kbw>MçOtvO-'7MhhrӝD<=O]st+%F.,y.Pp#\mrܝ]Ār/,ql1=ᑋ_)r';QR1*26Wнa#oh1=$Rꌁ V Bi$5N:r$Q2%dΦqB8}uizS}:IgM.:,Xo"Q|w [OsG;>U>< h&B+0%&Yi A3,p,RDDeJ1D`rGr^YHXdê`sӠu $(gHS ۾ p>̤d/0׵Ɣp;? ᭢3ގv,X>ܹ9$e[.0~\!{h巏yLp,hBFiq`MXT8Fla~x~+W5)yaOIT"ܵDУB,?ӻʃX1vp{:t GVhﲫg}VF!'sBv Tզt3("hx\L-[i>W 8 B?{N D0uʜoy,1Db*/֩R݌`=Rޥ;? l|".N& U3xvA|6>ioTQ@QKLRK=7 "t c 2QFR$Nxʽh%H{Pr]J\'3 (Ŕz,\k)kAA~ÜANަbkz} 0a4)B#Hq ^>>  3{^mKīn;"<&*cS3f߷]Io6EQVWkn?W NV __>@1KbJLTHN)MR1- l8ikKfyXcOYoGv>w3r5]eBcYo?n ~s]W[Z\m&zX GqLFRBvSrp3XVXEJҜpWao]Njِ OQ}{@RuHMK2{KjY!BxޞryriE2 3offYT8JbJup]m>7rZ^BfB>,hf=zsG& a-%jr;1ple6w+mj&KئJ6UTЧO3~9W%-yA-sqPtR'  Fg<{ ~=?0Μ^"cw\J@66.}Kg Tu&|y P~:Km߆d` 04X!TÈgRItTS갇2&dCZY~Y(MmjMAtJEn? 
c{>'% y*S|>j4*gn<>nR1XPN>m-FZں8:Z64䕫hE˚ )3nnNXuDV{6'ˏhĦ< liOa_o;sؕSEI^0X̑:jXRxeq2oCbZz*-=k-厷0hujZoaTcEKXK]a܌#hz,ePhgIxCxK%CE u*`6>٫s,P7;맲k^NXz>ϗ=K:O?V!Z򢙁p AcpT D{I)ԈUq\g L KnjġhsrHl3 U:_ʒHY0Gd4jq6x'{| D1NoN *T-H4#$SS">M$|gY\Όd="'ϟ"#c| ҈䆂jVP$Zth"7 wxoiлm~qm7GX('Xy/O X]/G"bMG!WSP3Gx v$WϨLaA\!Rj@# ɒ4d86 sD"Fu5VJ$J1-G_HmCU鹰ƿ=sdĪ$"N#%q2mab`'8FP$2ZdR+E\ȍb!%ǐ?Mmj,:bpp1P#KPdb-$M0KRDDR"թnTBq{`fVmqd!1Ds/ 61T(3 )V(fiƉR Fg1.2mhET=,7+Ea/ʝo&tM8x($g"YP{f ܎>OJCX)!lPZ`l:,ר{LH/v8.'-q)Q?i"x&N<`K{D]nlQ{e=.& {W>8Lh@`چaP u,^Θô"dDXv“痝h\ݏtxlv0qvVޓ!U 7S ~iT3GrB * cM-1sw{@bt pnQXPN>mʴԴk^{hАWA:%#C([(Qp!4pmAC^Vtʱ6oUP^SoIQS{J5*=Z_{ q$j% B)iČ2*4"Fi`[IEU=X%MSΙ,KegQ1F5LXY$r9j1H,MNX p)j=},Wu]bpK`^AxbdWvII%FĢ8Y??}p('dw ;i_߸F32EݦHy~QSTVM}sbh&4Dr0% p~5lh !cHTK2ceR²A4D(I3$LQPTS"R$onN XQ>T+e]~ܬ?nLk]&H'b"x0Z-Sy"U}92eA͸ƣ8;Emj,.¿.ZjZZ8J5䢥筥i)}=%UjZT+EKZK9kSTV&]9z|NRRK`lF-(i>gw8IE32 v"p1ņLb14Ch3i(X~$HJ2̞vs3L(EƴuM~"4Ub-U4 taǾ'Q#Rp(j-J)ԈDR"Av@C3DFcOʡV:BHJCL5L"Z\*2!d ..x6WY:?f Z곉Xv4 J/8R1)Ti"qT'(SXWpNb]x FI8u-&p*^$#c| ₈E^d+Fz+q9{l t8XKW{Xb;к_% zt81CXJJޤz Gc OzSpEm߷6my&zՓvh:7KM`a TΪU;weܾ YD$+NpcE"W"4q*4id04He[Ѓ~3),Ёn&[0$X**1{S&:Oo6x.] Ap1: (#,2c 82#b#hx" ď@Pܶ}Òs8_ EgU?8i@5ט|'X<{tROF+^Ո $ZGuu8消!$m9-1`80blN5jByL+t ~o=@,?[xH y*SM|-.:uQY)ںJ9Z64䕫hNQ9кҀ-.:urϺW;Z64䕫hc"H0ʆ.T XK͔Y ƽۙ){7bܳ(,gH15;{/6wgE3bg{jw ,ӕ2t$i+F\_նs7i`&7[N7{s4/js|督(ImKFvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000005102361415130167570017704 0ustar rootrootJan 09 10:45:53 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 09 10:45:53 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 09 10:45:53 crc restorecon[4692]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc 
restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc 
restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 
10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc 
restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:53 crc restorecon[4692]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 
crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:53 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 
10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 09 10:45:54 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]:
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 09 10:45:54 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
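Every restorecon entry above has the same shape: a path, the phrase "not reset as customized by admin to", and the SELinux context the file would otherwise have received. When sifting through thousands of such entries, a small parser helps; the following is a minimal sketch (the helper name and regex are mine, not part of any log tooling) that extracts the path and target context from one journal line:

```python
import re

# Matches restorecon's "left alone because an admin customization was
# detected" message, capturing the file path and the target context.
ENTRY = re.compile(
    r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized by admin to "
    r"(?P<context>\S+)"
)

def parse_restorecon(line: str):
    """Return (path, target_context), or None if the line is not such an entry."""
    m = ENTRY.search(line)
    return (m.group("path"), m.group("context")) if m else None

sample = ("Jan 09 10:45:54 crc restorecon[4692]: "
          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock "
          "not reset as customized by admin to system_u:object_r:container_file_t:s0")
print(parse_restorecon(sample))
# → ('/var/lib/kubelet/plugins/csi-hostpath/csi.sock', 'system_u:object_r:container_file_t:s0')
```

"Relabeled ... from ... to ..." lines (like the kubenswrapper one above) deliberately do not match and yield None, so the parser can be run over the whole journal stream.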
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 09 10:45:54 crc kubenswrapper[4727]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.685121    4727 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688207    4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688230    4727 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688235    4727 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688240    4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688246    4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688250    4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688255    4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688260    4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688264    4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688270    4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688274    4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688280    4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688296    4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688305    4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688313    4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688319    4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688326    4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688333    4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688339    4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688344    4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688349    4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688354    4727 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688358    4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688363    4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688367    4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688372    4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688377    4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688381    4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688385    4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688390    4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688395    4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688399    4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688405    4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688409    4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688414    4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688420    4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688425    4727 feature_gate.go:330] unrecognized feature gate: Example
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688430    4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688435    4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688440    4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688445    4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688450    4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688455    4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688460    4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688465    4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688470    4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688476    4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688481    4727 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688487    4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688492    4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688496    4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688501    4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688505    4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688529    4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688533    4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688538    4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688543    4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688548    4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688553    4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688557    4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688563    4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688571    4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
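The long run of "unrecognized feature gate" warnings above is expected on an OpenShift node: gates such as GatewayAPI or AdminNetworkPolicy are OpenShift-level gates that the upstream kubelet's feature-gate table does not contain, so it logs a warning (feature_gate.go:330) and ignores them, while known GA or deprecated gates (CloudDualStackNodeIPs, KMSv1, ValidatingAdmissionPolicy) instead produce the "will be removed in a future release" warnings. Gates reach the kubelet through the featureGates map of its KubeletConfiguration; an illustrative fragment (values here are examples, not this node's actual /etc/kubernetes/kubelet.conf) looks like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CloudDualStackNodeIPs: true   # GA upstream gate: logged via feature_gate.go:353
  KMSv1: true                   # deprecated upstream gate: logged via feature_gate.go:351
  GatewayAPI: true              # OpenShift-only gate: "unrecognized" to the kubelet
```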
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688577    4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688581    4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688586    4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688591    4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688597    4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688602    4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688607    4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688611    4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.688616    4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688719    4727 flags.go:64] FLAG: --address="0.0.0.0"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688729    4727 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688738    4727 flags.go:64] FLAG: --anonymous-auth="true"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688744    4727 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688753    4727 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688759    4727 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688765    4727 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688771    4727 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688777    4727 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688782    4727 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688788    4727 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688793    4727 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688799    4727 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688806    4727 flags.go:64] FLAG: --cgroup-root=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688811    4727 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688816    4727 flags.go:64] FLAG: --client-ca-file=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688821    4727 flags.go:64] FLAG: --cloud-config=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688826    4727 flags.go:64] FLAG: --cloud-provider=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688830    4727 flags.go:64] FLAG: --cluster-dns="[]"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688837    4727 flags.go:64] FLAG: --cluster-domain=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688842    4727 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688847    4727 flags.go:64] FLAG: --config-dir=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688851    4727 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 09 10:45:54
crc kubenswrapper[4727]: I0109 10:45:54.688857 4727 flags.go:64] FLAG: --container-log-max-files="5" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688863 4727 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688869 4727 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688874 4727 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688880 4727 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688885 4727 flags.go:64] FLAG: --contention-profiling="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688890 4727 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688895 4727 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688900 4727 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688905 4727 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688912 4727 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688917 4727 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688922 4727 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688927 4727 flags.go:64] FLAG: --enable-load-reader="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688932 4727 flags.go:64] FLAG: --enable-server="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688938 4727 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688944 4727 flags.go:64] 
FLAG: --event-burst="100" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688949 4727 flags.go:64] FLAG: --event-qps="50" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688955 4727 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688960 4727 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688965 4727 flags.go:64] FLAG: --eviction-hard="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688971 4727 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688977 4727 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688981 4727 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688987 4727 flags.go:64] FLAG: --eviction-soft="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688992 4727 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.688997 4727 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689003 4727 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689007 4727 flags.go:64] FLAG: --experimental-mounter-path="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689011 4727 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689016 4727 flags.go:64] FLAG: --fail-swap-on="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689020 4727 flags.go:64] FLAG: --feature-gates="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689025 4727 flags.go:64] FLAG: --file-check-frequency="20s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689030 4727 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689034 4727 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689039 4727 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689043 4727 flags.go:64] FLAG: --healthz-port="10248" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689050 4727 flags.go:64] FLAG: --help="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689054 4727 flags.go:64] FLAG: --hostname-override="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689059 4727 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689063 4727 flags.go:64] FLAG: --http-check-frequency="20s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689067 4727 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689071 4727 flags.go:64] FLAG: --image-credential-provider-config="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689076 4727 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689080 4727 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689084 4727 flags.go:64] FLAG: --image-service-endpoint="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689088 4727 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689093 4727 flags.go:64] FLAG: --kube-api-burst="100" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689097 4727 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689101 4727 flags.go:64] FLAG: --kube-api-qps="50" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689108 4727 
flags.go:64] FLAG: --kube-reserved="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689112 4727 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689117 4727 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689121 4727 flags.go:64] FLAG: --kubelet-cgroups="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689126 4727 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689130 4727 flags.go:64] FLAG: --lock-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689134 4727 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689138 4727 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689142 4727 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689149 4727 flags.go:64] FLAG: --log-json-split-stream="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689154 4727 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689158 4727 flags.go:64] FLAG: --log-text-split-stream="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689163 4727 flags.go:64] FLAG: --logging-format="text" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689167 4727 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689172 4727 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689177 4727 flags.go:64] FLAG: --manifest-url="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689181 4727 flags.go:64] FLAG: --manifest-url-header="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689187 4727 
flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689192 4727 flags.go:64] FLAG: --max-open-files="1000000" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689199 4727 flags.go:64] FLAG: --max-pods="110" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689204 4727 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689209 4727 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689215 4727 flags.go:64] FLAG: --memory-manager-policy="None" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689219 4727 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689225 4727 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689230 4727 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689235 4727 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689247 4727 flags.go:64] FLAG: --node-status-max-images="50" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689252 4727 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689257 4727 flags.go:64] FLAG: --oom-score-adj="-999" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689262 4727 flags.go:64] FLAG: --pod-cidr="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689267 4727 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689275 4727 flags.go:64] FLAG: 
--pod-manifest-path="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689280 4727 flags.go:64] FLAG: --pod-max-pids="-1" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689286 4727 flags.go:64] FLAG: --pods-per-core="0" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689291 4727 flags.go:64] FLAG: --port="10250" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689296 4727 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689301 4727 flags.go:64] FLAG: --provider-id="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689305 4727 flags.go:64] FLAG: --qos-reserved="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689310 4727 flags.go:64] FLAG: --read-only-port="10255" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689315 4727 flags.go:64] FLAG: --register-node="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689320 4727 flags.go:64] FLAG: --register-schedulable="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689325 4727 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689333 4727 flags.go:64] FLAG: --registry-burst="10" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689338 4727 flags.go:64] FLAG: --registry-qps="5" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689342 4727 flags.go:64] FLAG: --reserved-cpus="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689347 4727 flags.go:64] FLAG: --reserved-memory="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689354 4727 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689360 4727 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689365 4727 flags.go:64] FLAG: --rotate-certificates="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.689370 4727 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689376 4727 flags.go:64] FLAG: --runonce="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689381 4727 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689387 4727 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689392 4727 flags.go:64] FLAG: --seccomp-default="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689397 4727 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689402 4727 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689407 4727 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689412 4727 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689417 4727 flags.go:64] FLAG: --storage-driver-password="root" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689422 4727 flags.go:64] FLAG: --storage-driver-secure="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689427 4727 flags.go:64] FLAG: --storage-driver-table="stats" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689432 4727 flags.go:64] FLAG: --storage-driver-user="root" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689437 4727 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689442 4727 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689448 4727 flags.go:64] FLAG: --system-cgroups="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689453 4727 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689461 4727 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689466 4727 flags.go:64] FLAG: --tls-cert-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689471 4727 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689477 4727 flags.go:64] FLAG: --tls-min-version="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689482 4727 flags.go:64] FLAG: --tls-private-key-file="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689487 4727 flags.go:64] FLAG: --topology-manager-policy="none" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689492 4727 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689497 4727 flags.go:64] FLAG: --topology-manager-scope="container" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689501 4727 flags.go:64] FLAG: --v="2" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689528 4727 flags.go:64] FLAG: --version="false" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689535 4727 flags.go:64] FLAG: --vmodule="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689541 4727 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.689547 4727 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689881 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689913 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689920 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 09 10:45:54 crc 
kubenswrapper[4727]: W0109 10:45:54.689926 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689935 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689940 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689946 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689951 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689956 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689960 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689964 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689969 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689973 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689979 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689985 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689991 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.689996 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690002 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690006 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690011 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690016 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690022 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690027 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690031 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690036 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690039 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690044 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690048 4727 feature_gate.go:330] unrecognized feature gate: Example Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690052 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690057 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690061 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690066 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690072 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690077 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690083 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690089 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690094 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690099 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690104 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690108 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690118 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690123 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690127 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690131 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690136 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690140 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690144 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690149 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690153 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: 
W0109 10:45:54.690158 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690163 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690167 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690172 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690176 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690180 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690184 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690189 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690193 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690197 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690201 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690205 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690209 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690213 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690217 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 09 
10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690222 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690227 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690232 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690236 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690241 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690245 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.690249 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.690289 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.703297 4727 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.703351 4727 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703470 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 
10:45:54.703485 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703496 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703505 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703541 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703553 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703563 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703572 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703580 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703588 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703598 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703611 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703622 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703631 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703640 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703650 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703658 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703667 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703675 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703683 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703690 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703698 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703706 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703713 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703721 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 
10:45:54.703729 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703737 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703744 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703752 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703760 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703768 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703778 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703786 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703794 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703802 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703810 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703817 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703825 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703833 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703841 4727 feature_gate.go:330] unrecognized 
feature gate: ClusterAPIInstall Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703851 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703859 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703867 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703878 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703887 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703896 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703904 4727 feature_gate.go:330] unrecognized feature gate: Example Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703912 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703922 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703931 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703940 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703949 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703957 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703965 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703973 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703982 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703990 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.703997 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704005 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704013 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704020 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704028 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704035 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704043 4727 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704051 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704059 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704067 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704074 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704082 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704090 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704098 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.704111 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704349 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704366 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704376 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 09 
10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704384 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704394 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704406 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704415 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704425 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704433 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704444 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704454 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704464 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704475 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704486 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704496 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704536 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704546 4727 feature_gate.go:330] unrecognized feature gate: 
DNSNameResolver Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704555 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704566 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704576 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704585 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704595 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704606 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704616 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704626 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704637 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704646 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704656 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704666 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704676 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704686 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704695 4727 
feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704705 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704715 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704725 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704735 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704745 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704754 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704763 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704773 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704784 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704796 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704808 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704820 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704832 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704842 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704854 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704866 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704876 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704887 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704897 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704909 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704917 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704924 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704934 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704943 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 09 10:45:54 crc 
kubenswrapper[4727]: W0109 10:45:54.704954 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704964 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704974 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.704988 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705001 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705012 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705024 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705034 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705045 4727 feature_gate.go:330] unrecognized feature gate: Example Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705054 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705064 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705075 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705084 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.705094 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 09 10:45:54 crc kubenswrapper[4727]: 
W0109 10:45:54.705104 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.705119 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.705438 4727 server.go:940] "Client rotation is on, will bootstrap in background" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.714391 4727 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.714613 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.715588 4727 server.go:997] "Starting client certificate rotation" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.715619 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.716135 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 20:33:49.931372023 +0000 UTC Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.716288 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.724558 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.726684 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.727572 4727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.739440 4727 log.go:25] "Validated CRI v1 runtime API" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.757264 4727 log.go:25] "Validated CRI v1 image API" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.759411 4727 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.762960 4727 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-09-10-41-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.763012 4727 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.780252 4727 manager.go:217] Machine: {Timestamp:2026-01-09 10:45:54.778563188 +0000 UTC m=+0.228468019 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a4360e9d-d030-43eb-b040-259eb77bd39d BootID:efb1b54a-bec3-40af-877b-b80c0cec5403 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1b:7d:89 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1b:7d:89 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:88:e7:65 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:39:23:73 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:43:ca Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:74:2a:16 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9a:38:50:2a:51:7a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:9e:fc:33:33:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.780610 4727 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.780907 4727 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.782217 4727 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.782710 4727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.782785 4727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.783560 4727 topology_manager.go:138] "Creating topology manager with none policy" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.783628 4727 container_manager_linux.go:303] "Creating device plugin manager" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.784116 4727 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.784550 4727 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.785185 4727 state_mem.go:36] "Initialized new in-memory state store" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.785362 4727 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.786906 4727 kubelet.go:418] "Attempting to sync node with API server" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.786946 4727 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.786992 4727 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.787018 4727 kubelet.go:324] "Adding apiserver pod source" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.787038 4727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.789438 4727 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.789670 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.789779 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.789826 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.790002 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.790038 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.791362 4727 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792293 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792343 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792359 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792374 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792398 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792414 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792429 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792452 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792468 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792484 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792503 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792546 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.792873 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.793466 4727 server.go:1280] "Started kubelet"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.793989 4727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.793990 4727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.794379 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.794832 4727 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 09 10:45:54 crc systemd[1]: Started Kubernetes Kubelet.
Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.795619 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18890a35c624357a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,LastTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.796639 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.796710 4727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.796973 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:40:39.511893293 +0000 UTC
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797074 4727 server.go:460] "Adding debug handlers to kubelet server"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797446 4727 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797478 4727 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.797728 4727 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.797892 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.799669 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.799770 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.797364 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.803767 4727 factory.go:55] Registering systemd factory
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.804365 4727 factory.go:221] Registration of the systemd container factory successfully
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805673 4727 factory.go:153] Registering CRI-O factory
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805728 4727 factory.go:221] Registration of the crio container factory successfully
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805811 4727 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805839 4727 factory.go:103] Registering Raw factory
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.805862 4727 manager.go:1196] Started watching for new ooms in manager
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.806975 4727 manager.go:319] Starting recovery of all containers
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811373 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811445 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811461 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811475 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811488 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811502 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811536 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811553 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811570 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811584 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811595 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811606 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811618 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811633 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811644 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811653 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811663 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811675 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811685 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811696 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811709 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811720 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811734 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811749 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811761 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811776 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811792 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811809 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811823 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811838 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811850 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811868 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811879 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811892 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811907 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811920 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811937 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811950 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811965 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811983 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.811997 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812047 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812064 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812077 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812091 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812107 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812122 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812145 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812159 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812174 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812378 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812391 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812411 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812426 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812444 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812461 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812474 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812487 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812503 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812597 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812610 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812624 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812636 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812648 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812668 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812681 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812694 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812717 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812729 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812743 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812758 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812775 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812789 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812803 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812817 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812829 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812843 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812855 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812868 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812882 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.812897 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815851 4727 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815905 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815926 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815940 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815953 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815966 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815986 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.815999 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816012 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816024 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816036 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816049 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816063 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816076 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816090 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816102 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816113 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816126 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816138 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816151 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816166 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816180 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816195 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816208 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816265 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816280 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816294 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816309 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816323 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816338 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816351 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816365 4727 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816379 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816394 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816415 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816429 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816443 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816456 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816469 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816482 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816496 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816526 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816542 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816554 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816566 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816579 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816593 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816605 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816616 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816628 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816642 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816655 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816674 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816687 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816707 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816720 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816734 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816746 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816759 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816774 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816788 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816801 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816814 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816825 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816838 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816849 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816861 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816873 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.816888 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816900 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816914 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816927 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816939 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816953 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816967 4727 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816982 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.816996 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817010 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817024 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817039 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817052 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817066 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817078 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817092 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817108 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817122 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817140 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817153 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817168 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817182 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817198 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817211 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817225 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 
09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817237 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817251 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817264 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817277 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817291 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817307 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817320 
4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817335 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817352 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817368 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817381 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817394 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817408 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817423 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817436 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817449 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817463 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817479 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817493 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817532 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817545 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817558 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817573 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817587 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817600 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817622 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817637 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817652 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817665 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817679 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817693 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817707 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817723 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817737 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817749 4727 reconstruct.go:97] "Volume reconstruction finished" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.817758 4727 reconciler.go:26] "Reconciler: start to sync state" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.832532 4727 manager.go:324] Recovery completed Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.845394 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.848007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.848087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.848101 
4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.850111 4727 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.850147 4727 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.850174 4727 state_mem.go:36] "Initialized new in-memory state store" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.857108 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.858911 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.858959 4727 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.859001 4727 kubelet.go:2335] "Starting kubelet main sync loop" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.859049 4727 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 09 10:45:54 crc kubenswrapper[4727]: W0109 10:45:54.860733 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.860811 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 09 
10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.864980 4727 policy_none.go:49] "None policy: Start" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.866136 4727 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.866177 4727 state_mem.go:35] "Initializing new in-memory state store" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.900067 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.912021 4727 manager.go:334] "Starting Device Plugin manager" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.912626 4727 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.912650 4727 server.go:79] "Starting device plugin registration server" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913085 4727 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913302 4727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913479 4727 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913664 4727 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.913701 4727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.922559 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.960144 4727 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.960357 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.961928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.961970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962227 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962452 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.962537 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963811 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963855 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.963815 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.964797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.964842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.964852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965855 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965909 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.965932 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.966851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.966869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.966878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967193 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967449 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967556 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.967989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968278 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968338 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.968573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.969062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 10:45:54.969100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:54 crc kubenswrapper[4727]: I0109 
10:45:54.969109 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:54 crc kubenswrapper[4727]: E0109 10:45:54.999404 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.014255 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.015841 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.016448 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019043 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019133 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019275 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019299 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019392 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019441 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.019474 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.120947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121064 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 
10:45:55.121159 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121207 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121226 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121302 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121329 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 09 
10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121342 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121340 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121560 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121619 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.121706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.217018 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.218873 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.219959 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.301698 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.316634 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.325918 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.342666 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401 WatchSource:0}: Error finding container 8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401: Status 404 returned error can't find the container with id 8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401
Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.348643 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4 WatchSource:0}: Error finding container 871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4: Status 404 returned error can't find the container with id 871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4
Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.350871 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8 WatchSource:0}: Error finding container 26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8: Status 404 returned error can't find the container with id 26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.360066 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.368459 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.394527 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb WatchSource:0}: Error finding container 9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb: Status 404 returned error can't find the container with id 9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb
Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.399930 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.620279 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.622147 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.622631 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.749123 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.749207 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.785869 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18890a35c624357a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,LastTimestamp:2026-01-09 10:45:54.793436538 +0000 UTC m=+0.243341359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.795650 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.797805 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:29:38.658333949 +0000 UTC
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.864480 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.864618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8a5ea4bfbc3a8b7ffc0327f4b1cc61a408d7bd71f06f4ea3f10f162086027401"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866101 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" exitCode=0
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866174 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866218 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"238f0bffda992ac4f0ab43ed575c6762427e33280c6c9900c98b77c6791dcaec"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.866366 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.867408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.867436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.867445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868542 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae5ff1a01059e577d8aa9eca11df8a4d2d3d74cdfbb0fdb58acaa154cae9e013" exitCode=0
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae5ff1a01059e577d8aa9eca11df8a4d2d3d74cdfbb0fdb58acaa154cae9e013"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a438e5eafbbd04b64e43f6992e3953f3cadd1adb18327d31a545fb3daba77cb"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.868697 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869024 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.869647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870438 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421" exitCode=0
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870518 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"871005b0bddedff417ced0417e946d10e74a9563fce945693074f5cb1a5902a4"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.870587 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871695 4727 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f" exitCode=0
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"26297666a541cd7b52fa4094f94c6fbd5d9d215d9bad91ab2cbc1fae202bdce8"}
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.871806 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.873138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.873163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:55 crc kubenswrapper[4727]: I0109 10:45:55.873172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:55 crc kubenswrapper[4727]: W0109 10:45:55.948324 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:55 crc kubenswrapper[4727]: E0109 10:45:55.948402 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 09 10:45:56 crc kubenswrapper[4727]: E0109 10:45:56.201854 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s"
Jan 09 10:45:56 crc kubenswrapper[4727]: W0109 10:45:56.225477 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:56 crc kubenswrapper[4727]: E0109 10:45:56.225677 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 09 10:45:56 crc kubenswrapper[4727]: W0109 10:45:56.277246 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 09 10:45:56 crc kubenswrapper[4727]: E0109 10:45:56.277355 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.422848 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.424750 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.798434 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:54:21.024252827 +0000 UTC
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.798536 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h8m24.22571963s for next certificate rotation
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.805689 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878824 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.878994 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.880602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.880646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.880661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885636 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885663 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.885710 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.886678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.886719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.886731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.890878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.892301 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ee458da9a63c683c7e9c63e784f29b9752498c2430ccdceff10b1985783b0cd" exitCode=0
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.892365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ee458da9a63c683c7e9c63e784f29b9752498c2430ccdceff10b1985783b0cd"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.892492 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.893422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.893451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.893461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.894306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9"}
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.894444 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.895232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.895260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:56 crc kubenswrapper[4727]: I0109 10:45:56.895268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.901996 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c"}
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.902100 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.903112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.903149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.903159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904858 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5956cdf046241221791e256787fb6607ebd743de5040a84ee17dd9e976c21cba" exitCode=0
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5956cdf046241221791e256787fb6607ebd743de5040a84ee17dd9e976c21cba"}
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904957 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.904997 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905021 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905069 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.905996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:57 crc kubenswrapper[4727]: I0109 10:45:57.906127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"840208c8cf6ade2126a2c30c797cb923af67a7e913daba30130f9a051f2a32e3"}
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910845 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8328c42516f23ed81dfa93bfedb532ce8ab4b5cb0d090f1010fa6715017faaa9"}
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2c74d5e83ddefdc953d5796d80f0b900e7c7cea7faa0bfbab4acd3cac387359"}
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910607 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.910893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19ffd327efb1695fd60992f7915bffc10705585158d64e224e66b7802c387a5f"}
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.911914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.911952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:58 crc kubenswrapper[4727]: I0109 10:45:58.911963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.919951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"643f923f7389af30733922fbb5054b81c61914e8aceef8ae1f7b74e1a5b88ac3"}
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.920050 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.920165 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:45:59 crc kubenswrapper[4727]: I0109 10:45:59.921596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.922783 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.923909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.923970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.923980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:00 crc kubenswrapper[4727]: I0109 10:46:00.992569 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.667330 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.925580 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.926763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.926798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:01 crc kubenswrapper[4727]: I0109 10:46:01.926809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.210598 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.210789 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.212147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.212183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.212197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.616014 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.616191 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.617779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.617853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.617900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.711955 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy"
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.927952 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.927966 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:02 crc kubenswrapper[4727]: I0109 10:46:02.929464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.132982 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.133155 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.134485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 
10:46:03.134577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:03 crc kubenswrapper[4727]: I0109 10:46:03.134591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.416107 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.416354 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.418569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.418640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:04 crc kubenswrapper[4727]: I0109 10:46:04.418654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:04 crc kubenswrapper[4727]: E0109 10:46:04.922732 4727 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 09 10:46:05 crc kubenswrapper[4727]: I0109 10:46:05.616201 4727 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:46:05 crc kubenswrapper[4727]: I0109 10:46:05.616334 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.219207 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.219300 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.422578 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.422786 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.424338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.424384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.424398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:06 crc kubenswrapper[4727]: E0109 10:46:06.426648 4727 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.720248 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.731004 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.797413 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 09 10:46:06 crc kubenswrapper[4727]: E0109 10:46:06.807397 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.938213 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.939686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.939738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 10:46:06.939750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:06 crc kubenswrapper[4727]: I0109 
10:46:06.943300 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.290055 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.290147 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.715928 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]log ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]etcd ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 09 10:46:07 crc kubenswrapper[4727]: 
[+]poststarthook/generic-apiserver-start-informers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-filter ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-apiextensions-informers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-apiextensions-controllers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/crd-informer-synced ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-system-namespaces-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 09 10:46:07 crc kubenswrapper[4727]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/bootstrap-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/start-kube-aggregator-informers ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-registration-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-discovery-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]autoregister-completion ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-openapi-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 09 10:46:07 crc kubenswrapper[4727]: livez check failed Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.716007 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.940873 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.941915 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.941982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:07 crc kubenswrapper[4727]: I0109 10:46:07.941996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.026792 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028059 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.028181 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.944298 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.945650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.945717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:08 crc kubenswrapper[4727]: I0109 10:46:08.945743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:10 crc kubenswrapper[4727]: I0109 10:46:10.940484 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 09 10:46:10 crc kubenswrapper[4727]: I0109 10:46:10.940613 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: 
connection refused" Jan 09 10:46:10 crc kubenswrapper[4727]: I0109 10:46:10.989375 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.002914 4727 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.588837 4727 csr.go:261] certificate signing request csr-x4pgc is approved, waiting to be issued Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.600414 4727 csr.go:257] certificate signing request csr-x4pgc is issued Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.708888 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.709114 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.710558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.710596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.710609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.728675 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.953030 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.954177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.954224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:11 crc kubenswrapper[4727]: I0109 10:46:11.954238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.286443 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288318 4727 trace.go:236] Trace[46291964]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:59.243) (total time: 13044ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[46291964]: ---"Objects listed" error: 13044ms (10:46:12.288) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[46291964]: [13.044801529s] [13.044801529s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288364 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288456 4727 trace.go:236] Trace[23214324]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:57.913) (total time: 14374ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[23214324]: ---"Objects listed" error: 14374ms (10:46:12.288) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[23214324]: [14.374663778s] [14.374663778s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288481 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288905 4727 trace.go:236] Trace[510296964]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:58.133) (total time: 14155ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[510296964]: ---"Objects listed" error: 14155ms (10:46:12.288) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[510296964]: [14.155415621s] [14.155415621s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.288921 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.290838 4727 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.291991 4727 trace.go:236] Trace[559012523]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Jan-2026 10:45:59.052) (total time: 13239ms): Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[559012523]: ---"Objects listed" error: 13239ms (10:46:12.291) Jan 09 10:46:12 crc kubenswrapper[4727]: Trace[559012523]: [13.239870398s] [13.239870398s] END Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.292012 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.601833 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-09 10:41:11 +0000 UTC, rotation deadline is 2026-09-29 11:01:02.189576932 +0000 UTC Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.601879 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6312h14m49.58770003s for next certificate rotation Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.621067 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.632454 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.941375 4727 apiserver.go:52] "Watching apiserver" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.948773 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.949199 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.949479 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.950937 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.951041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951147 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951147 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.951187 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951264 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.951340 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.951253 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.952083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.954388 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.954451 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.954395 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.959383 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.959598 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960018 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960593 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.960779 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.969173 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 
10:46:12.975850 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:46:12 crc kubenswrapper[4727]: E0109 10:46:12.978425 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.984973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.985191 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55978->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.985265 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55978->192.168.126.11:17697: read: connection reset by peer" Jan 09 10:46:12 crc kubenswrapper[4727]: I0109 10:46:12.998790 4727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.004886 
4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.019943 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.034857 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.035644 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042211 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042242 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042267 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 
crc kubenswrapper[4727]: I0109 10:46:13.042290 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042315 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042342 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042367 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042419 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042441 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042527 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042625 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042657 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042684 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042751 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042781 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042806 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042834 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042858 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042887 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042961 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.042989 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043013 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043064 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043089 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043115 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043139 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043164 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043197 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043220 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043325 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043350 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043421 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043523 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043546 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043570 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043572 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043571 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043652 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043682 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 
10:46:13.043705 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043833 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.043964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044213 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044261 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044347 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044379 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044526 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044679 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044796 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.044819 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:13.544796197 +0000 UTC m=+18.994700978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.044966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.045138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.045172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.045278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046034 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046071 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046174 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046194 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046232 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046250 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046309 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046329 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046346 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046380 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046430 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046449 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046464 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" 
(UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046695 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046727 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046744 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046780 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046812 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046829 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046847 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046889 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046904 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.046970 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047019 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047052 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047070 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047085 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047099 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047118 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047136 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047293 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047315 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047350 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047369 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047393 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047427 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047445 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047463 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047481 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047499 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047531 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047549 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047567 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047585 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047622 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047650 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047767 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047800 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047846 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047885 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047901 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047945 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047965 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047948 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047993 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.047984 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048077 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048114 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048139 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048179 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048229 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048282 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048304 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048355 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048382 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048401 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048423 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048451 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048475 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048543 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048575 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048614 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048644 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048668 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048715 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048740 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048918 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048939 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049018 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049041 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049064 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049087 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049177 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049195 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049253 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049275 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049319 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049337 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049396 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049418 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049435 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049453 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049492 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049527 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049548 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049568 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049606 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049665 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049911 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049958 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.049998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID:
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050150 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050261 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050276 4727 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050289 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050300 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050311 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050322 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050333 4727 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050344 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050355 4727 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050366 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050379 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050392 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050403 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050414 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050425 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050437 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050450 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050463 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050483 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050493 4727 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050524 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050535 4727 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050545 4727 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050556 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050567 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050578 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050590 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050600 4727 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050611 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050625 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node 
\"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050635 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050645 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050655 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050667 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050678 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050687 4727 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050698 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.048446 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.050720 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.050792 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051805 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051861 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.051903 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:13.551849888 +0000 UTC m=+19.001754669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052393 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052777 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051045 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051044 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051061 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051470 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.052900 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051577 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.053007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.053758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054001 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054539 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054889 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.054953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055227 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055521 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055591 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.055654 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.056050 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057194 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057636 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057815 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.057848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.058167 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.058430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.058981 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059149 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059217 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059301 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.059710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060042 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060793 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.060958 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061264 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.061376 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061400 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.061455 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:13.561433921 +0000 UTC m=+19.011338702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061735 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.061792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.062984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067394 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067393 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.067723 4727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068467 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.068936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069035 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069369 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069420 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069425 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.069569 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.070445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.070693 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.070736 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.076094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.076350 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076705 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076737 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076752 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.076824 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:13.576800648 +0000 UTC m=+19.026705429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.051025 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078947 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.079488 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.079584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.079888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080352 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080785 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.080797 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.078618 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081203 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081392 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081541 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.081124 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.082345 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.082402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083075 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083108 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083128 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.083202 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:13.583177529 +0000 UTC m=+19.033082320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.083286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.083610 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084496 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084619 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084922 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.084971 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.085424 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.085830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.086112 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.086477 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.086447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087220 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087478 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.087842 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.088067 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.089635 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.091110 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.091807 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.092017 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093053 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093246 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093493 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
(OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093605 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093705 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095834 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.093964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094500 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.094845 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095135 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095288 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095461 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.095753 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096035 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096468 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096638 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.096959 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.097391 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.097558 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.097912 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.098524 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.098841 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.099169 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.099255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.099866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.103075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.103232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.103299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.104888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105010 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.104974 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105097 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105331 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.105832 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.106775 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.107327 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.113865 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.126067 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.127145 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.128593 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.133144 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.133698 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.138050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.149399 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152922 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 
10:46:13.152951 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152968 4727 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152960 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.152985 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153059 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153072 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153085 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 
crc kubenswrapper[4727]: I0109 10:46:13.153097 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153110 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153123 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153136 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153150 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153164 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153175 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153185 4727 reconciler_common.go:293] "Volume detached for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153196 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153210 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153223 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153236 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153252 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153266 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153281 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153294 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153307 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153320 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153411 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153430 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153446 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153463 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153477 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153493 4727 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153527 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153546 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153564 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153580 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153596 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153612 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153628 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153642 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153657 4727 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153670 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153686 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153701 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153715 4727 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153730 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153745 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153759 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153772 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153788 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153804 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153818 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153835 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153850 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153864 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153877 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153891 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" 
DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153905 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153919 4727 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153933 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153947 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153961 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153975 4727 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.153990 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154004 4727 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154018 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154034 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154048 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154062 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154075 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154090 4727 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154103 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154117 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154131 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154144 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154158 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154172 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154190 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154205 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154218 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154232 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154246 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154260 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154274 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154291 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154305 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: 
I0109 10:46:13.154319 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154335 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154349 4727 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154365 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154383 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154401 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154417 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc 
kubenswrapper[4727]: I0109 10:46:13.154436 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154450 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154462 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154476 4727 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154490 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154524 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154541 4727 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154555 4727 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154569 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154582 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154595 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154607 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154621 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154633 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154646 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc 
kubenswrapper[4727]: I0109 10:46:13.154657 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154671 4727 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154683 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154694 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154706 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154718 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154730 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154745 4727 reconciler_common.go:293] 
"Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154757 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154771 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154792 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154804 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154818 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154829 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154841 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154855 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154868 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154880 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154892 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154906 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154918 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154929 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on 
node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154943 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154958 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154973 4727 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154986 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.154999 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155012 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155024 4727 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155037 4727 reconciler_common.go:293] 
"Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155050 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155064 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155075 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155091 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155105 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155118 4727 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155132 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155145 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155159 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155175 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155189 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155201 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155216 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155230 4727 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155244 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155257 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155271 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155285 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155300 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.155315 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.160902 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.174292 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.187789 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.273032 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.281633 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.290129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.560866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.560981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.561022 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.560995607 +0000 UTC m=+20.010900388 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.561098 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.561162 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.561144112 +0000 UTC m=+20.011048983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.662337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.662386 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.662408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662465 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662533 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.662519667 +0000 UTC m=+20.112424448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662538 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662559 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662572 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662604 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.662593639 +0000 UTC m=+20.112498420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662758 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662805 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662824 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: E0109 10:46:13.662918 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:14.662890447 +0000 UTC m=+20.112795258 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.966812 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a54aae95b4a0d312469fe6ef388542dce7d6e3dad660e3d74aacc03dc9e16ac2"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.970048 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.970109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.970123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"17aade6e432352668b3d4a0e36e7c1205d8e474dcc8a7f099521b96e7937d8fb"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.972265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.972352 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e5b3d69113994016b4a5103d68234de99d20cbdb98841eb85df17dc12b939114"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.974112 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.976488 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c" exitCode=255 Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.976570 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c"} Jan 09 10:46:13 crc kubenswrapper[4727]: I0109 10:46:13.977559 4727 scope.go:117] "RemoveContainer" containerID="23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.029861 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.071946 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.090451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.110842 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.126986 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.145297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.162010 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.169387 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qlpv5"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.169829 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.171732 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.171881 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.171882 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.183951 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.198081 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.216055 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.231265 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.247471 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.267087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d335f7f5-7ede-4146-9ecc-f0718b547d43-hosts-file\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.267138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrfh\" (UniqueName: \"kubernetes.io/projected/d335f7f5-7ede-4146-9ecc-f0718b547d43-kube-api-access-bgrfh\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.269280 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.288008 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.300268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.318206 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.329812 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.368251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d335f7f5-7ede-4146-9ecc-f0718b547d43-hosts-file\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.368297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrfh\" (UniqueName: 
\"kubernetes.io/projected/d335f7f5-7ede-4146-9ecc-f0718b547d43-kube-api-access-bgrfh\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.368573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d335f7f5-7ede-4146-9ecc-f0718b547d43-hosts-file\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.396443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrfh\" (UniqueName: \"kubernetes.io/projected/d335f7f5-7ede-4146-9ecc-f0718b547d43-kube-api-access-bgrfh\") pod \"node-resolver-qlpv5\" (UID: \"d335f7f5-7ede-4146-9ecc-f0718b547d43\") " pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.487355 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qlpv5" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.500436 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd335f7f5_7ede_4146_9ecc_f0718b547d43.slice/crio-661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2 WatchSource:0}: Error finding container 661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2: Status 404 returned error can't find the container with id 661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2 Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.551711 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hzdp7"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.552161 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553099 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7sgfm"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553679 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-57zpr"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553855 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.553947 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.559072 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560070 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560233 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560342 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560725 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560782 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.560875 4727 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.561149 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.561213 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.561332 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.562249 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.563533 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.570226 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.570325 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.570436 4727 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.570496 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.570473476 +0000 UTC m=+22.020378257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.570815 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.570808036 +0000 UTC m=+22.020712817 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.576669 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.618193 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.649565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.667878 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cnibin\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-kubelet\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-os-release\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670707 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-bin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670723 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-hostroot\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670746 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-multus-certs\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670778 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670797 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670810 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-proxy-tls\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.670859 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.670843223 +0000 UTC m=+22.120748004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cnibin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-netns\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.670969 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wkd\" (UniqueName: \"kubernetes.io/projected/f0230d78-c2b3-4a02-8243-6b39e8eecb90-kube-api-access-h2wkd\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:14 
crc kubenswrapper[4727]: I0109 10:46:14.671055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-rootfs\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671078 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-daemon-config\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-system-cni-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671189 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671201 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671210 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671227 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cni-binary-copy\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.671329 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.671303846 +0000 UTC m=+22.121208827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-socket-dir-parent\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-conf-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " 
pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rp9j\" (UniqueName: \"kubernetes.io/projected/c3694c5b-19cf-464e-90b7-8e719d3a0d11-kube-api-access-6rp9j\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671790 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-k8s-cni-cncf-io\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671821 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-os-release\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-system-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671950 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-multus\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.671981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-etc-kubernetes\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktz9\" (UniqueName: \"kubernetes.io/projected/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-kube-api-access-6ktz9\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.672143 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.672181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.672198 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:16.672185141 +0000 UTC m=+22.122090122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.687400 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.703758 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.716822 4727 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717169 4727 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717216 4727 reflector.go:484] 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717290 4727 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717333 4727 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717366 4727 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717402 4727 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717433 4727 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.717418 4727 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c/status\": read tcp 38.102.83.200:45624->38.102.83.200:6443: use of closed network connection" Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717589 4727 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717677 4727 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717883 4727 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.717685 4727 reflector.go:484] 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.718072 4727 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.718102 4727 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.719380 4727 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.719443 4727 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.749456 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.768957 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-netns\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772687 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-daemon-config\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772712 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2wkd\" (UniqueName: \"kubernetes.io/projected/f0230d78-c2b3-4a02-8243-6b39e8eecb90-kube-api-access-h2wkd\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772739 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-rootfs\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-system-cni-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " 
pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cni-binary-copy\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-socket-dir-parent\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-conf-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772839 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " 
pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772893 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rp9j\" (UniqueName: \"kubernetes.io/projected/c3694c5b-19cf-464e-90b7-8e719d3a0d11-kube-api-access-6rp9j\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-netns\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.772920 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-k8s-cni-cncf-io\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-etc-kubernetes\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" 
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773020 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-conf-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773055 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-socket-dir-parent\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-os-release\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-os-release\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-system-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773186 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-etc-kubernetes\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-multus\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-system-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773216 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ktz9\" (UniqueName: \"kubernetes.io/projected/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-kube-api-access-6ktz9\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773260 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-rootfs\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cnibin\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-kubelet\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773413 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-os-release\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-bin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-hostroot\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773496 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-multus-certs\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773530 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-multus\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773548 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-proxy-tls\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cnibin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-kubelet\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cnibin\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773586 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-os-release\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773657 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-multus-certs\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-cni-dir\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773700 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-var-lib-cni-bin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-hostroot\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cnibin\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.773770 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3694c5b-19cf-464e-90b7-8e719d3a0d11-system-cni-dir\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774124 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-mcd-auth-proxy-config\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-multus-daemon-config\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774170 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0230d78-c2b3-4a02-8243-6b39e8eecb90-host-run-k8s-cni-cncf-io\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0230d78-c2b3-4a02-8243-6b39e8eecb90-cni-binary-copy\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774377 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.774815 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3694c5b-19cf-464e-90b7-8e719d3a0d11-cni-binary-copy\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.780280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-proxy-tls\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.787151 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.792827 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ktz9\" (UniqueName: \"kubernetes.io/projected/ea573637-1ca1-4211-8c88-9bc9fa78d6c4-kube-api-access-6ktz9\") pod \"machine-config-daemon-hzdp7\" (UID: \"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\") " pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.792996 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rp9j\" (UniqueName: \"kubernetes.io/projected/c3694c5b-19cf-464e-90b7-8e719d3a0d11-kube-api-access-6rp9j\") pod \"multus-additional-cni-plugins-7sgfm\" (UID: \"c3694c5b-19cf-464e-90b7-8e719d3a0d11\") " pod="openshift-multus/multus-additional-cni-plugins-7sgfm"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.799799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2wkd\" (UniqueName: \"kubernetes.io/projected/f0230d78-c2b3-4a02-8243-6b39e8eecb90-kube-api-access-h2wkd\") pod \"multus-57zpr\" (UID: \"f0230d78-c2b3-4a02-8243-6b39e8eecb90\") " pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.803178 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.820189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.836728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.852962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.860142 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.860168 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.860305 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.860295 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.860420 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:46:14 crc kubenswrapper[4727]: E0109 10:46:14.860476 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.864252 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.865011 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.865788 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.866458 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.867175 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.867727 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.868320 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.869999 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.870008 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.871128 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.872079 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.872669 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.873686 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.875636 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.876548 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.877572 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.878208 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.879208 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.880190 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.880189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.882223 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.884717 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7sgfm"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.884779 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.885481 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.902365 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.902945 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.903644 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-57zpr"
Jan 09 10:46:14 crc kubenswrapper[4727]: W0109 10:46:14.910362 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3694c5b_19cf_464e_90b7_8e719d3a0d11.slice/crio-eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795 WatchSource:0}: Error finding container eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795: Status 404 returned error can't find the container with id eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.910457 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.911371 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.914682 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.916082 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.916667 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.916994 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.918050 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.918520 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.922895 4727 kubelet_volumes.go:152]
"Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.923002 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.924637 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.926073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.936424 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.943982 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.958693 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.967729 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.968593 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.969721 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.970386 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 
10:46:14.974536 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.975361 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.976369 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.977037 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.977891 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.978410 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.979254 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.979975 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 
10:46:14.981990 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.982526 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.983892 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.994457 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.995146 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.996113 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.996527 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.996982 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"] Jan 09 10:46:14 crc kubenswrapper[4727]: I0109 10:46:14.997832 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.000199 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"db1d5b9079c5ef9d075d8b48f59a077f78b84a728a96d7c81b25ddf23e3d0652"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.001164 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerStarted","Data":"eb261399a6a80393b3278e2fa90775fee94414fd45fff45312fb01f3aff6a795"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006044 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006607 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006745 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.006967 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.007821 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.007821 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 09 
10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.014698 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qlpv5" event={"ID":"d335f7f5-7ede-4146-9ecc-f0718b547d43","Type":"ContainerStarted","Data":"95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.014744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qlpv5" event={"ID":"d335f7f5-7ede-4146-9ecc-f0718b547d43","Type":"ContainerStarted","Data":"661bb41ac11dc487521a26892d0eef0759fef8fe679507f98b98c552279929d2"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.030791 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.042351 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.051056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.051366 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.053941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"81c1c51202da312ce03669d5c060485af0c383cec8c55724177a3bab0a529fb9"} Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.055580 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.075158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083609 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083638 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083762 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083818 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083842 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083911 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083955 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.083991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.084037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.084062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.084084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 
10:46:15.084106 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.104046 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.145787 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.167735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187070 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187098 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4rgl\" 
(UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187152 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187255 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187259 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187321 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187739 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"ovnkube-node-ngngm\" 
(UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187768 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187966 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.187998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc 
kubenswrapper[4727]: I0109 10:46:15.188090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188173 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188361 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.188819 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.189404 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.189455 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.191166 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.199635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.209345 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"ovnkube-node-ngngm\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.210925 4727 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.226573 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.249798 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.266537 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.284231 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.299872 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.316856 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.331684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.344772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.344928 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:15 crc kubenswrapper[4727]: W0109 10:46:15.356867 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33bb3d7e_6f5b_4a7b_b2c7_b04fb8e20e40.slice/crio-597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc WatchSource:0}: Error finding container 597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc: Status 404 returned error can't find the container with id 597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.382689 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.397574 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.413639 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.431340 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.451615 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.469183 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.482008 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.501790 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.515127 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.525382 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.528035 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.530225 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.543242 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.553408 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.556076 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.569366 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.603771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.614426 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.662983 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.727171 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.747987 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.814623 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.875601 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 10:46:15 crc kubenswrapper[4727]: I0109 10:46:15.921663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.042913 4727 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.065728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.065799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.067648 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" exitCode=0 Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.067747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.067853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.070598 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29"} 
Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.075726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.078476 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1" exitCode=0 Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.079673 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.085946 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.086267 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.094858 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.113861 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.127441 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.144726 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.167591 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.184390 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 10:46:16 crc 
kubenswrapper[4727]: I0109 10:46:16.184480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.198893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.212352 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.226825 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.236345 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238254 
4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.238693 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.243091 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.257151 4727 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.257447 4727 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.258519 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259119 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.259131 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.278084 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.278968 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.282581 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.295803 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.296362 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.304965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.305157 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.324934 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.327587 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc 
kubenswrapper[4727]: I0109 10:46:16.329079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.329232 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.345590 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.351581 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.357926 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.369467 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.374771 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.374894 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377561 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.377596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.383864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb
405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.398646 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.411091 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.443104 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.483111 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.499432 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.531113 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.549130 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hg5sh"] Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.549679 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.553004 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.553568 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.554073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.554205 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.556307 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.570699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.582415 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc 
kubenswrapper[4727]: I0109 10:46:16.585640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.585659 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.597818 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.606316 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.606560 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.60648971 +0000 UTC m=+26.056394491 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.606720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.606880 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.606939 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.606932302 +0000 UTC m=+26.056837083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.612324 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.644896 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.686246 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.687947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.687982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.687995 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.688013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.688024 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707165 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32de8b71-676d-47ed-a5e4-48737247937e-serviceca\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgjj\" (UniqueName: \"kubernetes.io/projected/32de8b71-676d-47ed-a5e4-48737247937e-kube-api-access-4xgjj\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707560 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 
10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707630 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32de8b71-676d-47ed-a5e4-48737247937e-host\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.707668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707810 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707835 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707849 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707906 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.707885215 +0000 UTC m=+26.157789996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707938 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707957 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707969 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.707979 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:16 crc 
kubenswrapper[4727]: E0109 10:46:16.708013 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.708000839 +0000 UTC m=+26.157905620 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.708166 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:20.708109702 +0000 UTC m=+26.158014483 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.722189 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 
10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.765437 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.790123 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.803992 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:
46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808446 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32de8b71-676d-47ed-a5e4-48737247937e-serviceca\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xgjj\" (UniqueName: \"kubernetes.io/projected/32de8b71-676d-47ed-a5e4-48737247937e-kube-api-access-4xgjj\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32de8b71-676d-47ed-a5e4-48737247937e-host\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.808720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32de8b71-676d-47ed-a5e4-48737247937e-host\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.809655 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/32de8b71-676d-47ed-a5e4-48737247937e-serviceca\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.853922 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xgjj\" (UniqueName: \"kubernetes.io/projected/32de8b71-676d-47ed-a5e4-48737247937e-kube-api-access-4xgjj\") pod \"node-ca-hg5sh\" (UID: \"32de8b71-676d-47ed-a5e4-48737247937e\") " pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.859889 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.859974 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.859920 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.860088 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.860122 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.860216 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:16 crc kubenswrapper[4727]: E0109 10:46:16.860287 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.892910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.892960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.892971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.893012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.893024 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:16Z","lastTransitionTime":"2026-01-09T10:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.903438 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c
4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.931047 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hg5sh" Jan 09 10:46:16 crc kubenswrapper[4727]: W0109 10:46:16.942963 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32de8b71_676d_47ed_a5e4_48737247937e.slice/crio-8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c WatchSource:0}: Error finding container 8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c: Status 404 returned error can't find the container with id 8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.945416 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:16 crc kubenswrapper[4727]: I0109 10:46:16.988530 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:16Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.006951 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.023878 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.062584 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.084039 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hg5sh" event={"ID":"32de8b71-676d-47ed-a5e4-48737247937e","Type":"ContainerStarted","Data":"8b2814bda81a798ed6f66abb9f39fb5b99d343cb6ce35b184d963ef57b71bf3c"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088655 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088682 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" 
event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.088723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.092745 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad" exitCode=0 Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.093888 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110484 4727 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.110604 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.112381 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.145131 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.183905 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.213977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.214082 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.226900 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.264886 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.303025 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc 
kubenswrapper[4727]: I0109 10:46:17.316854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.316866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.345607 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.383316 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.418950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.419333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.425438 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.463890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"na
me\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.505186 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.522492 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.540807 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.584653 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.622014 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.624962 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.680305 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.705367 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc 
kubenswrapper[4727]: I0109 10:46:17.729223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.729245 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.744447 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:17Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 
10:46:17.833622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.834251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:17 crc kubenswrapper[4727]: I0109 10:46:17.937588 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:17Z","lastTransitionTime":"2026-01-09T10:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.041844 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.098362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hg5sh" event={"ID":"32de8b71-676d-47ed-a5e4-48737247937e","Type":"ContainerStarted","Data":"a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.101452 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04" exitCode=0 Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.101533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.116770 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.134668 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.149493 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.150998 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.167304 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.182582 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.195419 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.212361 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.229669 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.242749 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.251764 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.259151 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c
4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.273783 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.291765 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.303263 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.323650 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.345227 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.354974 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.383624 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.422376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.458145 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.465423 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.512557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.548883 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.561994 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.585347 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.630579 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665208 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.665485 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.704300 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.746165 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.768975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.769100 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.785962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.824925 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.859601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.859760 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.859649 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:18 crc kubenswrapper[4727]: E0109 10:46:18.859901 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:18 crc kubenswrapper[4727]: E0109 10:46:18.860017 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:18 crc kubenswrapper[4727]: E0109 10:46:18.860130 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.863445 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: 
I0109 10:46:18.871321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.871364 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:18 crc kubenswrapper[4727]: I0109 10:46:18.975428 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:18Z","lastTransitionTime":"2026-01-09T10:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.078955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.079028 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.107748 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.111314 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8" exitCode=0 Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.111366 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.130352 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.152639 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.168265 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.180673 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.181567 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.196306 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.207086 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.228044 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.243478 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.258914 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.274542 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.284688 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.306170 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.343689 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.386152 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.388433 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.428377 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.491266 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.595153 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.697955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.698053 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.802963 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.913981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:19 crc kubenswrapper[4727]: I0109 10:46:19.914094 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:19Z","lastTransitionTime":"2026-01-09T10:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.017499 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.119209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.120236 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37" exitCode=0 Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.120302 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.137141 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.157573 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.170373 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.188409 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.204026 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.217876 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.221936 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.228470 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.248658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.269703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.286971 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.303675 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.319651 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.324220 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.336307 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.351439 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:20 crc 
kubenswrapper[4727]: I0109 10:46:20.427381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.427395 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.529786 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.632737 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.653389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.653700 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-09 10:46:28.653661098 +0000 UTC m=+34.103565879 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.653770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.653973 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.654059 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.654040479 +0000 UTC m=+34.103945260 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.735682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.754580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.754641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.754686 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754786 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754863 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754895 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754920 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.754897 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.754867778 +0000 UTC m=+34.204772749 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755010 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.754981961 +0000 UTC m=+34.204886922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755158 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755213 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755227 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.755310 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.755285391 +0000 UTC m=+34.205190172 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.847980 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.859375 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.859487 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.859574 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.859627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.859711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:46:20 crc kubenswrapper[4727]: E0109 10:46:20.859803 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.951619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.951943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.952071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.952227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:20 crc kubenswrapper[4727]: I0109 10:46:20.952347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:20Z","lastTransitionTime":"2026-01-09T10:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.056391 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.132087 4727 generic.go:334] "Generic (PLEG): container finished" podID="c3694c5b-19cf-464e-90b7-8e719d3a0d11" containerID="616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d" exitCode=0 Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.132612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerDied","Data":"616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.158370 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.164720 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.177612 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.194895 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.216674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.234775 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.250235 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.267939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc 
kubenswrapper[4727]: I0109 10:46:21.268047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268060 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.268661 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.287131 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.300687 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.318195 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.331767 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.347450 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.358583 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.370934 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.375084 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:21Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474071 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.474107 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.577273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.680429 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.783169 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.885976 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:21 crc kubenswrapper[4727]: I0109 10:46:21.989694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:21Z","lastTransitionTime":"2026-01-09T10:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.092751 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.140264 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.141956 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.141995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.155256 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" event={"ID":"c3694c5b-19cf-464e-90b7-8e719d3a0d11","Type":"ContainerStarted","Data":"8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.160244 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.168533 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.172598 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:22 crc 
kubenswrapper[4727]: I0109 10:46:22.173228 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.188631 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195525 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.195620 4727 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.205677 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.225079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.240748 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.254271 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.268041 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.285355 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.297707 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.298781 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.316462 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.330778 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.344983 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.361095 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.377152 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.392291 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.401448 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.405480 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.419366 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.434862 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.454391 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.473541 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.484618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.498629 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.503813 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.515138 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.530003 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.543046 4727 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.564533 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.581297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:22Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.606610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.709956 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.812547 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.859881 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.859977 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:22 crc kubenswrapper[4727]: E0109 10:46:22.860038 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.860146 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:22 crc kubenswrapper[4727]: E0109 10:46:22.860352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:22 crc kubenswrapper[4727]: E0109 10:46:22.860396 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.916986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:22 crc kubenswrapper[4727]: I0109 10:46:22.917234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:22Z","lastTransitionTime":"2026-01-09T10:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.020996 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.124143 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.158196 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.226562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.329400 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.431907 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.534959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.535080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.638399 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.745582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.848883 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:23 crc kubenswrapper[4727]: I0109 10:46:23.951292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:23Z","lastTransitionTime":"2026-01-09T10:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.054318 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.157192 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.163645 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/0.log" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.167573 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a" exitCode=1 Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.167630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.168399 4727 scope.go:117] "RemoveContainer" containerID="cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.185222 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.199606 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.214150 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.228533 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.249349 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.260931 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.266218 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.282241 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.296415 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.311884 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.323673 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.342424 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.355376 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.363899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.393173 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service 
openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4d
e4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.411963 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 
10:46:24.467884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.467901 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.571135 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.673427 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.775615 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.859567 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.859668 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:24 crc kubenswrapper[4727]: E0109 10:46:24.859777 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.859800 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:24 crc kubenswrapper[4727]: E0109 10:46:24.859884 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:24 crc kubenswrapper[4727]: E0109 10:46:24.860062 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.874609 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.877684 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.886437 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.906178 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10
:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service 
openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4d
e4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.920960 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.934954 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.954111 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.968674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980317 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.980347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:24Z","lastTransitionTime":"2026-01-09T10:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.982190 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443
879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:24 crc kubenswrapper[4727]: I0109 10:46:24.999207 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45
b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.014176 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.031256 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.051083 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.065795 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.078367 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.083417 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.174351 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/0.log" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.178365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.178484 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.186293 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.194610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.209617 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.234478 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service 
openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\
\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.251349 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.265666 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.282557 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.288914 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.298699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.313007 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.330140 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.346007 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.366989 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.382159 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.391671 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.397784 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z 
is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.407026 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.494584 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.596945 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.699693 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.802216 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:25 crc kubenswrapper[4727]: I0109 10:46:25.905534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:25Z","lastTransitionTime":"2026-01-09T10:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.007904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.008304 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.111831 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.185052 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.185962 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/0.log" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.189474 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" exitCode=1 Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.189537 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.189607 4727 scope.go:117] "RemoveContainer" containerID="cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.190310 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.190494 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.208197 4727 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.214847 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.223503 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.228235 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.247977 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.268021 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.283928 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.297249 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.313732 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.318122 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.329468 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z 
is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.342905 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.356270 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.372919 4727 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.388837 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.403725 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.420480 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.434155 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\
\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc 
kubenswrapper[4727]: I0109 10:46:26.450964 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.469572 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.486624 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.503047 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.517853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.522962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.523056 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.530463 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.551429 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.565329 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.578994 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.599014 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfd397aab9e3ea77e5ad837d3aa55a52304ad9834d467ae4a3d49ef9453b9d7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:23Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:23Z is after 2025-08-24T17:21:41Z]\\\\nI0109 10:46:23.367362 6031 services_controller.go:443] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.169\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0109 10:46:23.367373 6031 services_controller.go:444] Built service openshift-kube-scheduler/scheduler LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367380 6031 services_controller.go:445] Built service openshift-kube-scheduler/scheduler LB template configs for network=default: []services.lbConfig(nil)\\\\nI0109 10:46:23.367392 6031 services_controller.go:451] Built service openshift-kube-sc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] 
Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d34
8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.615054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 
10:46:26.626227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.626237 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.632782 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.647565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.661864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.718986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.719001 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.735775 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.740952 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.756622 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.760539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.760753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.760886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.761040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.761174 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.778268 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.782888 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.800174 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.803987 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.817775 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:26Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.817932 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.819573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.859587 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.859604 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.859750 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.859924 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.860049 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:26 crc kubenswrapper[4727]: E0109 10:46:26.860174 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:26 crc kubenswrapper[4727]: I0109 10:46:26.922116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:26Z","lastTransitionTime":"2026-01-09T10:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.025672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.128775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.195797 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.200142 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:27 crc kubenswrapper[4727]: E0109 10:46:27.200337 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.216104 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.228422 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236439 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.236596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.261835 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.279851 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.296250 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.315962 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.330583 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343101 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.343141 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.348112 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443
879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.368858 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45
b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.384269 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.399949 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.415296 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.423057 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg"] Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.423772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.426723 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.426942 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.430417 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.440980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.441050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.441078 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9l5r\" (UniqueName: \"kubernetes.io/projected/50be6d5b-675b-4837-ba20-6d6c75a363d6-kube-api-access-r9l5r\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.441153 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.443867 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.445956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.445997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.446010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.446027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.446039 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.458079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.471771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.486047 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.498133 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.515109 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.532328 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542843 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.542868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9l5r\" (UniqueName: \"kubernetes.io/projected/50be6d5b-675b-4837-ba20-6d6c75a363d6-kube-api-access-r9l5r\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.543621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.543671 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50be6d5b-675b-4837-ba20-6d6c75a363d6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.548796 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.550395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50be6d5b-675b-4837-ba20-6d6c75a363d6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.551952 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.563117 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9l5r\" (UniqueName: \"kubernetes.io/projected/50be6d5b-675b-4837-ba20-6d6c75a363d6-kube-api-access-r9l5r\") pod \"ovnkube-control-plane-749d76644c-h9pvg\" (UID: \"50be6d5b-675b-4837-ba20-6d6c75a363d6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.566458 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.580836 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.595853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.611398 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.626756 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.642280 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.651739 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.659272 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.691282 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:27Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.741440 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.754617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: W0109 10:46:27.757449 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50be6d5b_675b_4837_ba20_6d6c75a363d6.slice/crio-bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1 WatchSource:0}: Error finding container bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1: Status 404 returned error can't find the container with id bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1 Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.857964 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:27 crc kubenswrapper[4727]: I0109 10:46:27.960413 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:27Z","lastTransitionTime":"2026-01-09T10:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.062913 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.153897 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vhsj4"] Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.154377 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.154438 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.166287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.167057 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.182149 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.193405 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc 
kubenswrapper[4727]: I0109 10:46:28.203562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" event={"ID":"50be6d5b-675b-4837-ba20-6d6c75a363d6","Type":"ContainerStarted","Data":"28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.203608 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" event={"ID":"50be6d5b-675b-4837-ba20-6d6c75a363d6","Type":"ContainerStarted","Data":"6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.203621 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" event={"ID":"50be6d5b-675b-4837-ba20-6d6c75a363d6","Type":"ContainerStarted","Data":"bc8542781eb6025bb079a0fabe93937dd0fa5a0a335b6ebab0e1e9518bafa5f1"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.207279 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.224928 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.239407 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.249750 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.250039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkzz\" (UniqueName: 
\"kubernetes.io/projected/6a29665a-01da-4439-b13d-3950bf573044-kube-api-access-8mkzz\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.254932 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.270688 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.272268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.291893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.308291 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.321914 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.343888 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.350775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkzz\" (UniqueName: \"kubernetes.io/projected/6a29665a-01da-4439-b13d-3950bf573044-kube-api-access-8mkzz\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.350842 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.350968 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.351023 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:28.851004137 +0000 UTC m=+34.300908918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.358238 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkzz\" (UniqueName: \"kubernetes.io/projected/6a29665a-01da-4439-b13d-3950bf573044-kube-api-access-8mkzz\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.372863 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.377947 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.391321 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.402597 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.418385 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.430393 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.443161 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.458482 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.471764 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475853 
4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.475866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.484772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.497451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.512163 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.526300 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.538703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.549762 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.565844 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.579325 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.584209 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.595801 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.606008 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.614727 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:28 crc 
kubenswrapper[4727]: I0109 10:46:28.682417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.682601 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.753283 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.753390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.753502 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.753563 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.753550793 +0000 UTC m=+50.203455574 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.753607 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.753602484 +0000 UTC m=+50.203507265 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.785736 4727 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.853990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.854047 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.854072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.854099 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854208 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854262 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854216 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854326 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.8542933 +0000 UTC m=+50.304198121 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854341 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854371 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854253 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854435 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:29.854415383 +0000 UTC m=+35.304320264 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854302 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854450 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:44.854443524 +0000 UTC m=+50.304348425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854459 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.854575 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-09 10:46:44.854492515 +0000 UTC m=+50.304397346 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.860011 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.860090 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.860007 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.860211 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.860412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:28 crc kubenswrapper[4727]: E0109 10:46:28.860558 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.888223 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:28 crc kubenswrapper[4727]: I0109 10:46:28.990791 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:28Z","lastTransitionTime":"2026-01-09T10:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.093963 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.196677 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.298906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.298978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.299000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.299031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.299054 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.401973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.402128 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.505872 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.608856 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.712582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815150 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.815226 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.859987 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:29 crc kubenswrapper[4727]: E0109 10:46:29.860140 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.865865 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:29 crc kubenswrapper[4727]: E0109 10:46:29.866133 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:29 crc kubenswrapper[4727]: E0109 10:46:29.866261 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:31.866223618 +0000 UTC m=+37.316128579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.918929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.918973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.918983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.919003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:29 crc kubenswrapper[4727]: I0109 10:46:29.919014 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:29Z","lastTransitionTime":"2026-01-09T10:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.021877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.022542 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.125530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.125876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.125975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.126078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.126188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.228910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.228985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.228998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.229021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.229036 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.332669 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.435683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.537997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.538405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.538636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.538909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.539186 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.641930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.642088 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.745799 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.848562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.860291 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.860305 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:30 crc kubenswrapper[4727]: E0109 10:46:30.860388 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.860426 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:30 crc kubenswrapper[4727]: E0109 10:46:30.860637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:30 crc kubenswrapper[4727]: E0109 10:46:30.860718 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:30 crc kubenswrapper[4727]: I0109 10:46:30.951490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:30Z","lastTransitionTime":"2026-01-09T10:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.054716 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.158127 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.261803 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.365314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.468236 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.571734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.674665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.777941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.777985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.777998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.778015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.778027 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.859294 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:31 crc kubenswrapper[4727]: E0109 10:46:31.859559 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.881178 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.885494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:31 crc kubenswrapper[4727]: E0109 10:46:31.885696 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:31 crc kubenswrapper[4727]: E0109 10:46:31.885782 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:35.885759763 +0000 UTC m=+41.335664574 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:31 crc kubenswrapper[4727]: I0109 10:46:31.984392 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:31Z","lastTransitionTime":"2026-01-09T10:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.087809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.190907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.190965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.190983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.191007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.191026 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.294435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.397459 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.499926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.602554 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.705807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.808277 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.859898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.860062 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.860105 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:32 crc kubenswrapper[4727]: E0109 10:46:32.860246 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:32 crc kubenswrapper[4727]: E0109 10:46:32.860796 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:32 crc kubenswrapper[4727]: E0109 10:46:32.861003 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911150 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:32 crc kubenswrapper[4727]: I0109 10:46:32.911199 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:32Z","lastTransitionTime":"2026-01-09T10:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.014683 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.117772 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.220292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.322841 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.426776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530113 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.530183 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.632843 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.735979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.736070 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.838994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.839012 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.860251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:33 crc kubenswrapper[4727]: E0109 10:46:33.860403 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:33 crc kubenswrapper[4727]: I0109 10:46:33.942754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:33Z","lastTransitionTime":"2026-01-09T10:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.045352 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.148596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.251234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.354171 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.456932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.560303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.665685 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.767899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.860368 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.860437 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.860636 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:34 crc kubenswrapper[4727]: E0109 10:46:34.860625 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:34 crc kubenswrapper[4727]: E0109 10:46:34.860697 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:34 crc kubenswrapper[4727]: E0109 10:46:34.860880 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.870185 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.892857 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.912798 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.925202 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.940899 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.963890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.973214 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:34Z","lastTransitionTime":"2026-01-09T10:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:34 crc kubenswrapper[4727]: I0109 10:46:34.977414 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:34 crc 
kubenswrapper[4727]: I0109 10:46:34.991472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.008862 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.027702 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.045072 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.063339 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.075376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.076599 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.096010 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.100577 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.101500 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.101750 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.111584 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.125701 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.140731 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: 
I0109 10:46:35.178117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.178143 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.282400 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.384550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.487833 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.591282 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.694346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.797760 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.859650 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.859795 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.901205 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:35Z","lastTransitionTime":"2026-01-09T10:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:35 crc kubenswrapper[4727]: I0109 10:46:35.932380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.932594 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:35 crc kubenswrapper[4727]: E0109 10:46:35.932689 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:46:43.932663563 +0000 UTC m=+49.382568384 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004693 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.004742 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.107243 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.210500 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.313762 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.416618 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.520226 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.622637 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725679 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.725690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.829385 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.860273 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.860346 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.860277 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:36 crc kubenswrapper[4727]: E0109 10:46:36.860449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:36 crc kubenswrapper[4727]: E0109 10:46:36.860556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:36 crc kubenswrapper[4727]: E0109 10:46:36.860732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.933242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.933664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.933896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.934130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.934321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974326 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:36 crc kubenswrapper[4727]: I0109 10:46:36.974342 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:36Z","lastTransitionTime":"2026-01-09T10:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:36.999888 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:36Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.005717 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.020981 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.025578 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.044156 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.049497 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.067224 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.071884 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.088954 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:37Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.089185 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.091473 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.194627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.297891 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.401345 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.504465 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.607898 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.608080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.711859 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.814402 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.859448 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:37 crc kubenswrapper[4727]: E0109 10:46:37.859782 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:37 crc kubenswrapper[4727]: I0109 10:46:37.919364 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:37Z","lastTransitionTime":"2026-01-09T10:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.022175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.125435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228097 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.228189 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.330758 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.433601 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.536982 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.639284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.747809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.851324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.859891 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.860102 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.860326 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:38 crc kubenswrapper[4727]: E0109 10:46:38.860334 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:38 crc kubenswrapper[4727]: E0109 10:46:38.860538 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:38 crc kubenswrapper[4727]: E0109 10:46:38.860669 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:38 crc kubenswrapper[4727]: I0109 10:46:38.954429 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:38Z","lastTransitionTime":"2026-01-09T10:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.057750 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.161347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.263961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.264048 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.366985 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.469980 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.572812 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.676102 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.778959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.779040 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.859931 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:39 crc kubenswrapper[4727]: E0109 10:46:39.860146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.880965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:39 crc kubenswrapper[4727]: I0109 10:46:39.983280 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:39Z","lastTransitionTime":"2026-01-09T10:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.086569 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.189694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.292753 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.396205 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.499350 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.602785 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.705798 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.807945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.807989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.807997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.808012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.808023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.859575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.859677 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:40 crc kubenswrapper[4727]: E0109 10:46:40.859755 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.859780 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:40 crc kubenswrapper[4727]: E0109 10:46:40.859952 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:40 crc kubenswrapper[4727]: E0109 10:46:40.860034 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:40 crc kubenswrapper[4727]: I0109 10:46:40.911820 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:40Z","lastTransitionTime":"2026-01-09T10:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.018338 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.122087 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.225246 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.328863 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.432959 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536651 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.536669 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.639794 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.742836 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.847966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.848063 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.860301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:41 crc kubenswrapper[4727]: E0109 10:46:41.860579 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.950912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.950981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.950995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.951011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:41 crc kubenswrapper[4727]: I0109 10:46:41.951021 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:41Z","lastTransitionTime":"2026-01-09T10:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.054572 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.157368 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.260673 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.364604 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.467677 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.570773 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.674196 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.777449 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.860104 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.860184 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.860258 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:42 crc kubenswrapper[4727]: E0109 10:46:42.860404 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:42 crc kubenswrapper[4727]: E0109 10:46:42.860576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:42 crc kubenswrapper[4727]: E0109 10:46:42.860842 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.880284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.982850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.982923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.982990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.983016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:42 crc kubenswrapper[4727]: I0109 10:46:42.983033 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:42Z","lastTransitionTime":"2026-01-09T10:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.086965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.190771 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.294528 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.398116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.502436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.606785 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.710702 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.813436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.860074 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:43 crc kubenswrapper[4727]: E0109 10:46:43.860312 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:43 crc kubenswrapper[4727]: I0109 10:46:43.916846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:43Z","lastTransitionTime":"2026-01-09T10:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.019936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.019987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.019998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.020019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.020031 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.027660 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.027849 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.027954 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:00.027921215 +0000 UTC m=+65.477826196 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.123135 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.226161 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.329952 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.422610 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.432761 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.436966 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.437204 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.459882 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.475268 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.490230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.507053 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.520326 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc 
kubenswrapper[4727]: I0109 10:46:44.536623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.536731 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.537106 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.553007 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.570418 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.588198 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.602066 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.615155 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.631540 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.639719 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.643450 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.658045 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648
c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.673904 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.742782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.835169 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.835378 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.835348824 +0000 UTC m=+82.285253605 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.835431 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.835632 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.835685 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.835677023 +0000 UTC m=+82.285581804 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.845758 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.860128 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.860128 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.860254 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.860337 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.860261 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.860538 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.881393 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.894283 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.915825 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.929758 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.936238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.936299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.936331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936487 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936530 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936544 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936594 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936609 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.936589925 +0000 UTC m=+82.386494696 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936864 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936929 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936958 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.936961 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.936932614 +0000 UTC m=+82.386837545 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:46:44 crc kubenswrapper[4727]: E0109 10:46:44.937052 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:16.937031478 +0000 UTC m=+82.386936429 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.948146 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:44Z","lastTransitionTime":"2026-01-09T10:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.951896 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.970128 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:44 crc kubenswrapper[4727]: I0109 10:46:44.986096 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:44Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc 
kubenswrapper[4727]: I0109 10:46:45.003751 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.019981 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.030250 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc 
kubenswrapper[4727]: I0109 10:46:45.084541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.084557 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.087494 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.101995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.115086 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.128679 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.144853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.159526 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.173355 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:45Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.187775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.290755 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.395273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.498690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.602650 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.705912 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.808406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.859755 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:45 crc kubenswrapper[4727]: E0109 10:46:45.859951 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:45 crc kubenswrapper[4727]: I0109 10:46:45.911589 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:45Z","lastTransitionTime":"2026-01-09T10:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.014121 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116768 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.116840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.220213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.323531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.426186 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.529218 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.632222 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.733993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.734070 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.836835 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.859691 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.859755 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:46 crc kubenswrapper[4727]: E0109 10:46:46.859819 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:46 crc kubenswrapper[4727]: E0109 10:46:46.859904 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.859982 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:46 crc kubenswrapper[4727]: E0109 10:46:46.860226 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:46 crc kubenswrapper[4727]: I0109 10:46:46.939300 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:46Z","lastTransitionTime":"2026-01-09T10:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.041981 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.145440 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.248990 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.272829 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.287654 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.294247 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.312250 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.317985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.318189 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.339322 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.344981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.345002 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.359339 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.363991 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.382570 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:47Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.382777 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.384990 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.488698 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.591600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694661 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.694819 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.797791 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.859734 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:47 crc kubenswrapper[4727]: E0109 10:46:47.860215 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:47 crc kubenswrapper[4727]: I0109 10:46:47.901655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:47Z","lastTransitionTime":"2026-01-09T10:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004094 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.004142 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.106934 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.209598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.210268 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.313313 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416874 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.416905 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.519945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.520071 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.622979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.623754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.726494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.726824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.726911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.727014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.727138 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.830232 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.859887 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.859921 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:48 crc kubenswrapper[4727]: E0109 10:46:48.860163 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.860570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:48 crc kubenswrapper[4727]: E0109 10:46:48.860949 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:48 crc kubenswrapper[4727]: E0109 10:46:48.861141 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.861703 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:48 crc kubenswrapper[4727]: I0109 10:46:48.932763 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:48Z","lastTransitionTime":"2026-01-09T10:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.035350 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.138349 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240977 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.240988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.282915 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.285834 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.286362 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.312194 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.332446 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc 
kubenswrapper[4727]: I0109 10:46:49.344077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.344179 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.350622 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f
9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.363038 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.378083 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.403609 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.426791 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.446005 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc 
kubenswrapper[4727]: I0109 10:46:49.447326 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.447340 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.464851 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.485572 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.500218 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.514904 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.530774 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0e
fc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.546258 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.550948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.550989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.550999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.551017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.551028 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.564890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.583992 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.606285 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] 
Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:49Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653697 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.653735 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.757579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.758206 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.859209 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:49 crc kubenswrapper[4727]: E0109 10:46:49.859400 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.860940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.860988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.860999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.861020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.861032 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:49 crc kubenswrapper[4727]: I0109 10:46:49.963753 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:49Z","lastTransitionTime":"2026-01-09T10:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.066544 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.169484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.272809 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.293045 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.293611 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/1.log" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.296741 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" exitCode=1 Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.296790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.296839 4727 scope.go:117] "RemoveContainer" containerID="de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.298004 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.298296 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.318718 4727 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.335601 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.350877 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc 
kubenswrapper[4727]: I0109 10:46:50.366230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.375705 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.381089 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.395216 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.415712 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.433290 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.449835 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.463654 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.478995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.479239 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.491885 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.505722 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648
c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.529050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.541765 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.565537 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de13009fe1d9658e7ef8c7d800a08cd6743700ea7943e4cbad166306ada25801\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:25Z\\\",\\\"message\\\":\\\"hift-service-ca-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-service-ca-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:46:25.019054 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0109 10:46:25.019058 6161 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019069 6161 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0109 10:46:25.019078 6161 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0109 10:46:25.018933 6161 services_controller.go:445] Built\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\
\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 
crc kubenswrapper[4727]: I0109 10:46:50.582179 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:50Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.582935 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.685944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.685988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.685999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.686017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.686030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.788538 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.859359 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.859387 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.859425 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.859607 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.859825 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:50 crc kubenswrapper[4727]: E0109 10:46:50.860034 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.891926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:50 crc kubenswrapper[4727]: I0109 10:46:50.994987 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:50Z","lastTransitionTime":"2026-01-09T10:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.098536 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.201816 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.301462 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.303726 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.306086 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:46:51 crc kubenswrapper[4727]: E0109 10:46:51.306247 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.319955 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.333063 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.346658 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.359249 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc 
kubenswrapper[4727]: I0109 10:46:51.373657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.386574 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.397846 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc 
kubenswrapper[4727]: I0109 10:46:51.406946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.406966 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.412643 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.425703 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.439371 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.451995 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.465271 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.476846 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.487457 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.499309 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.509982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.510075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.511991 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.532018 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:51Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.617453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.617855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.617889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.618332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.618356 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721124 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.721160 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.824918 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.860258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:51 crc kubenswrapper[4727]: E0109 10:46:51.860485 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:51 crc kubenswrapper[4727]: I0109 10:46:51.928916 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:51Z","lastTransitionTime":"2026-01-09T10:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.032848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.136875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.136955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.136990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.137020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.137042 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.239930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.342997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.343169 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.446987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.447007 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.549856 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.652347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.755612 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.858801 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.859900 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:52 crc kubenswrapper[4727]: E0109 10:46:52.860094 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.860414 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:52 crc kubenswrapper[4727]: E0109 10:46:52.860562 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.860734 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:52 crc kubenswrapper[4727]: E0109 10:46:52.860862 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:52 crc kubenswrapper[4727]: I0109 10:46:52.961459 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:52Z","lastTransitionTime":"2026-01-09T10:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.063998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.064010 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.166753 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.270488 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.373857 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.477721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.581153 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.684621 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.787940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.788064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.859890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:53 crc kubenswrapper[4727]: E0109 10:46:53.860101 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.890682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.993955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:53 crc kubenswrapper[4727]: I0109 10:46:53.994103 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:53Z","lastTransitionTime":"2026-01-09T10:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.098320 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.201741 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.304998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.305009 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.408638 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.511692 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.614407 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.717870 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.820333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.859720 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.859835 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:54 crc kubenswrapper[4727]: E0109 10:46:54.859938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:54 crc kubenswrapper[4727]: E0109 10:46:54.860075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.860218 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:54 crc kubenswrapper[4727]: E0109 10:46:54.860300 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.875378 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.889486 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.906857 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.920744 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc 
kubenswrapper[4727]: I0109 10:46:54.923070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.923156 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:54Z","lastTransitionTime":"2026-01-09T10:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.938728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.959274 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.974985 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:54 crc kubenswrapper[4727]: I0109 10:46:54.991504 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:54Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.007224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.022347 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.025670 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.037345 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.054721 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.067735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.080252 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.093752 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.104738 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.124577 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:55Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.137970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.138088 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.241783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.242185 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.345379 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.448847 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.551891 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.654850 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.757760 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.859392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:55 crc kubenswrapper[4727]: E0109 10:46:55.859568 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.860868 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963959 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:55 crc kubenswrapper[4727]: I0109 10:46:55.963968 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:55Z","lastTransitionTime":"2026-01-09T10:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.066947 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.169818 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.272176 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374962 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.374989 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.477776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.580696 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.683371 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.786591 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.860057 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.860392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.860289 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:56 crc kubenswrapper[4727]: E0109 10:46:56.860826 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:56 crc kubenswrapper[4727]: E0109 10:46:56.861041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:56 crc kubenswrapper[4727]: E0109 10:46:56.861170 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.891552 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:56 crc kubenswrapper[4727]: I0109 10:46:56.995293 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:56Z","lastTransitionTime":"2026-01-09T10:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.098145 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.200679 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.303994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.304015 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.407082 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.510707 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.614817 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.671996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.672061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.684856 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.689892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.690292 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.704570 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.710487 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.723316 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.727937 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.742539 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.746929 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.746989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.747007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.747039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.747058 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.761873 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:46:57Z is after 2025-08-24T17:21:41Z" Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.762017 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.763999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.764080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.859632 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:57 crc kubenswrapper[4727]: E0109 10:46:57.859873 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.866998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.867098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:57 crc kubenswrapper[4727]: I0109 10:46:57.970230 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:57Z","lastTransitionTime":"2026-01-09T10:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072967 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.072976 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.176162 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.278910 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.381843 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.484401 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.587857 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.691451 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.795112 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.859633 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.859674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.859783 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:46:58 crc kubenswrapper[4727]: E0109 10:46:58.859998 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:46:58 crc kubenswrapper[4727]: E0109 10:46:58.860244 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:46:58 crc kubenswrapper[4727]: E0109 10:46:58.860293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.897490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.999904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.999957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:58 crc kubenswrapper[4727]: I0109 10:46:58.999969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:58.999990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.000004 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:58Z","lastTransitionTime":"2026-01-09T10:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.102964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.103061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.205749 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.308826 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.411693 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.514489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.515043 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.618242 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.720654 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.823781 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.859975 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:46:59 crc kubenswrapper[4727]: E0109 10:46:59.860187 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:46:59 crc kubenswrapper[4727]: I0109 10:46:59.926614 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:46:59Z","lastTransitionTime":"2026-01-09T10:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.029341 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.108854 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.109085 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.109221 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:47:32.109191152 +0000 UTC m=+97.559095933 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.132385 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.235150 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.337633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.440672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.543988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.648582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.753209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.856980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.857102 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.860242 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.860329 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.860371 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.860399 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.860531 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:47:00 crc kubenswrapper[4727]: E0109 10:47:00.860647 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.959926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:00 crc kubenswrapper[4727]: I0109 10:47:00.960085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:00Z","lastTransitionTime":"2026-01-09T10:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.063206 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.165921 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.165985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.165999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.166019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.166033 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.269314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.344092 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/0.log"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.344543 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" containerID="a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec" exitCode=1
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.344652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerDied","Data":"a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec"}
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.345197 4727 scope.go:117] "RemoveContainer" containerID="a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.362352 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.372871 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.378175 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z"
Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.389497 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.406497 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.422005 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.438687 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.453631 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.468169 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.475287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.481684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.493475 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648
c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.505770 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.517559 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.544854 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.559332 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.572251 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc 
kubenswrapper[4727]: I0109 10:47:01.578527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.578541 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.586501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 
10:47:01.598866 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:01Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:01 crc 
kubenswrapper[4727]: I0109 10:47:01.681537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.681629 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.785420 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.859681 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:01 crc kubenswrapper[4727]: E0109 10:47:01.859890 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.888897 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:01 crc kubenswrapper[4727]: I0109 10:47:01.992569 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:01Z","lastTransitionTime":"2026-01-09T10:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.095273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.198275 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.300996 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.350743 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/0.log" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.350826 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.368850 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.383050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.398716 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.405219 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.412501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.425651 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85
820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.440156 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.465368 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.479318 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.489197 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.503192 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.508652 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.519469 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.532090 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc 
kubenswrapper[4727]: I0109 10:47:02.549958 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.561941 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.581160 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.598223 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.610380 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:02Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.611477 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.714632 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818109 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.818123 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.859831 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.859956 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.859988 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:02 crc kubenswrapper[4727]: E0109 10:47:02.860122 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:02 crc kubenswrapper[4727]: E0109 10:47:02.860263 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:02 crc kubenswrapper[4727]: E0109 10:47:02.860382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:02 crc kubenswrapper[4727]: I0109 10:47:02.920793 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:02Z","lastTransitionTime":"2026-01-09T10:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.024147 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.126906 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.229372 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.332570 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435346 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.435406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.538207 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.641262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.745317 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.847632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.847891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.847972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.848059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.848165 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.860075 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:03 crc kubenswrapper[4727]: E0109 10:47:03.860187 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:03 crc kubenswrapper[4727]: I0109 10:47:03.951166 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:03Z","lastTransitionTime":"2026-01-09T10:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.054663 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.158134 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.261674 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.363848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.466960 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.569613 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.672365 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.775278 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.859990 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.860109 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:04 crc kubenswrapper[4727]: E0109 10:47:04.860164 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.860224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:04 crc kubenswrapper[4727]: E0109 10:47:04.860311 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:04 crc kubenswrapper[4727]: E0109 10:47:04.860534 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.878901 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.880461 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.896403 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.908449 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.924112 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.937864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.958043 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.970373 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.981895 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:04Z","lastTransitionTime":"2026-01-09T10:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.983728 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:04 crc kubenswrapper[4727]: I0109 10:47:04.997657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:04Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.013545 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.025619 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.037374 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.056158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.071328 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.082677 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc 
kubenswrapper[4727]: I0109 10:47:05.084168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084208 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.084257 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.096001 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f
9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.111262 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:05Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc 
kubenswrapper[4727]: I0109 10:47:05.187750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.187762 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.290645 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.392840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.495973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.496023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.598328 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.701336 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.804287 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.859703 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:05 crc kubenswrapper[4727]: E0109 10:47:05.859928 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.906958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:05 crc kubenswrapper[4727]: I0109 10:47:05.907047 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:05Z","lastTransitionTime":"2026-01-09T10:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.009336 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.113319 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.215878 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.318974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.319074 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.421987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.422078 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.525721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629113 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.629156 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.732956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.733085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.836787 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.860157 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.860287 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.860362 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.860296 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.860618 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.861084 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.861450 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:47:06 crc kubenswrapper[4727]: E0109 10:47:06.861657 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939966 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:06 crc kubenswrapper[4727]: I0109 10:47:06.939977 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:06Z","lastTransitionTime":"2026-01-09T10:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.043675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.147453 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.249953 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.353177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.456362 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559052 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.559088 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.662422 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.766214 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.859318 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:07 crc kubenswrapper[4727]: E0109 10:47:07.859593 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.869866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:07 crc kubenswrapper[4727]: I0109 10:47:07.972627 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:07Z","lastTransitionTime":"2026-01-09T10:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.045308 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.059555 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.065705 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.079826 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.085258 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.101069 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.112596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.125271 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.129833 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.143887 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:08Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.144041 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.146490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.249741 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.352921 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.455675 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.558822 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.661784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.764849 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.859898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.859948 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.859924 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.860106 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.860251 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:08 crc kubenswrapper[4727]: E0109 10:47:08.860330 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.866666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:08 crc kubenswrapper[4727]: I0109 10:47:08.969484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:08Z","lastTransitionTime":"2026-01-09T10:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.073926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.177531 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.280303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.383380 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.486152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.589315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.692967 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.796394 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.859904 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:09 crc kubenswrapper[4727]: E0109 10:47:09.860150 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:09 crc kubenswrapper[4727]: I0109 10:47:09.898988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:09Z","lastTransitionTime":"2026-01-09T10:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.001981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.002081 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.104867 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.227943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.227999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.228009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.228030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.228042 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.331346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.434965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.538602 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.641589 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.745481 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.849104 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.859433 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.859501 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.859582 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:10 crc kubenswrapper[4727]: E0109 10:47:10.859627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:10 crc kubenswrapper[4727]: E0109 10:47:10.859711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:10 crc kubenswrapper[4727]: E0109 10:47:10.859793 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:10 crc kubenswrapper[4727]: I0109 10:47:10.952575 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:10Z","lastTransitionTime":"2026-01-09T10:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.055937 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.158835 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.261411 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.364628 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.468949 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.571824 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.674734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.777991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.778074 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.859319 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:11 crc kubenswrapper[4727]: E0109 10:47:11.859592 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.880546 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:11 crc kubenswrapper[4727]: I0109 10:47:11.983963 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:11Z","lastTransitionTime":"2026-01-09T10:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.086619 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.188968 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.291953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.292067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.394446 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.497682 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.601613 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.704455 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.807914 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.859643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:12 crc kubenswrapper[4727]: E0109 10:47:12.859850 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.859665 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:12 crc kubenswrapper[4727]: E0109 10:47:12.859932 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.859643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:12 crc kubenswrapper[4727]: E0109 10:47:12.859981 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:12 crc kubenswrapper[4727]: I0109 10:47:12.912193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:12Z","lastTransitionTime":"2026-01-09T10:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.019972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.020079 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.123665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.227436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.330566 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.433376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.536992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.537016 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.640379 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.743979 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.847636 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.860004 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:13 crc kubenswrapper[4727]: E0109 10:47:13.860298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:13 crc kubenswrapper[4727]: I0109 10:47:13.950539 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:13Z","lastTransitionTime":"2026-01-09T10:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.053905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.053986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.054006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.054029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.054041 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.157817 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.259753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.260582 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.363961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.364075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.466896 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.569874 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.672994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.673113 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.776143 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.859398 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.859499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:14 crc kubenswrapper[4727]: E0109 10:47:14.859600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.859643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:14 crc kubenswrapper[4727]: E0109 10:47:14.859743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:14 crc kubenswrapper[4727]: E0109 10:47:14.859826 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.875757 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.878616 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.886893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.909681 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.922000 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.936012 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11
c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.951548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.965214 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981630 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.981934 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:14Z","lastTransitionTime":"2026-01-09T10:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:14 crc kubenswrapper[4727]: I0109 10:47:14.996544 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:14Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.009610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.024844 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.040032 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.054071 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.067395 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.082604 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:4
5:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba
0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.084578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.084725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.084827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: 
I0109 10:47:15.084937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.085009 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.097753 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.110611 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:15Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.187816 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.289948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.290554 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.393807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.496953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.497064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.599818 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.702873 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.806248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.860192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:15 crc kubenswrapper[4727]: E0109 10:47:15.860383 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:15 crc kubenswrapper[4727]: I0109 10:47:15.909985 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:15Z","lastTransitionTime":"2026-01-09T10:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.012930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116455 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.116469 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.220177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.323630 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.427697 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.530754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.633789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.634344 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.737540 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840957 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840968 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.840986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.841308 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.860284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.860432 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.860605 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.860479 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.860687 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.860828 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.876975 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.877146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.877230 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.87718594 +0000 UTC m=+146.327090711 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.877309 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.877406 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.877384166 +0000 UTC m=+146.327288947 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.944924 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:16Z","lastTransitionTime":"2026-01-09T10:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.978855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.978922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:16 crc kubenswrapper[4727]: I0109 10:47:16.978957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979084 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979145 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979169 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979185 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979221 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.979190934 +0000 UTC m=+146.429095865 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979253 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.979242546 +0000 UTC m=+146.429147327 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979594 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979645 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979659 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:16 crc kubenswrapper[4727]: E0109 10:47:16.979737 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:20.979716882 +0000 UTC m=+146.429621663 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.047714 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.150427 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.254205 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.356886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.356960 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.356982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.357003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.357013 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.459600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.562479 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.664885 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.767573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.859936 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:17 crc kubenswrapper[4727]: E0109 10:47:17.860471 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.860851 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.870177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:17 crc kubenswrapper[4727]: I0109 10:47:17.972698 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:17Z","lastTransitionTime":"2026-01-09T10:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.075864 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.178906 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.217159 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.231904 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.236778 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.251228 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.257443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.257525 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.257538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.257559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.257573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.271043 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.281418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.281481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.281497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.281546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.281561 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.308124 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.316315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.332423 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.332626 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.334991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.335075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.412553 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.414791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.415815 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.434224 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc 
kubenswrapper[4727]: I0109 10:47:18.438471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.438565 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.452065 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.470542 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.481834 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc 
kubenswrapper[4727]: I0109 10:47:18.496976 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.512617 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.526692 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc 
kubenswrapper[4727]: I0109 10:47:18.540876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.540890 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.546958 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.563799 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.580750 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.594772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.611139 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.623589 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.637110 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.643188 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.652120 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.664309 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.685562 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.746698 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849294 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.849397 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.859803 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.859836 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.859837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.860041 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.860517 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:18 crc kubenswrapper[4727]: E0109 10:47:18.861201 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.873931 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:18 crc kubenswrapper[4727]: I0109 10:47:18.952325 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:18Z","lastTransitionTime":"2026-01-09T10:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055349 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055367 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.055376 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.158309 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.260965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.363545 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.420887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.421651 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/2.log" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.424402 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" exitCode=1 Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.424528 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.424624 4727 scope.go:117] "RemoveContainer" containerID="77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.425341 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:19 crc kubenswrapper[4727]: E0109 10:47:19.425572 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.439400 4727 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.450447 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc 
kubenswrapper[4727]: I0109 10:47:19.464192 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.466101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.477814 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.492548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.505811 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.516989 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.529739 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.542050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.555050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.566054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.568936 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.578220 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.591011 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.600596 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.614084 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.626413 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.635596 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.654699 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77ac97c6881fa81f377bfd1d5de19559332dca85a02f23e406f9a7fdf277e4d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:46:50Z\\\",\\\"message\\\":\\\"ent-go/informers/factory.go:160\\\\nI0109 10:46:49.713949 6446 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.713985 6446 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0109 10:46:49.714588 6446 reflector.go:311] 
Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0109 10:46:49.718339 6446 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0109 10:46:49.718360 6446 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0109 10:46:49.720274 6446 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0109 10:46:49.720403 6446 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0109 10:46:49.720406 6446 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0109 10:46:49.720437 6446 factory.go:656] Stopping watch factory\\\\nI0109 10:46:49.720452 6446 handler.go:208] Removed *v1.Node event handler 2\\\\nI0109 10:46:49.720466 6446 ovnkube.go:599] Stopped ovnkube\\\\nI0109 10:46:49.720523 6446 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0109 10:46:49.720655 6446 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 
10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"h
ost-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:19Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.671972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.672175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.775667 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.859254 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:19 crc kubenswrapper[4727]: E0109 10:47:19.859408 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.878331 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:19 crc kubenswrapper[4727]: I0109 10:47:19.980883 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:19Z","lastTransitionTime":"2026-01-09T10:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.083730 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.185869 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.289175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.391875 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.430847 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.435275 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.435445 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.454637 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea71394
5346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.470409 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.483824 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc 
kubenswrapper[4727]: I0109 10:47:20.494727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.494738 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.499943 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61
6a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.514488 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.528358 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.543254 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.556134 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.570362 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.583230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.595326 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.602871 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.615710 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.627344 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.647746 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.660543 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.674183 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.687353 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.700149 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:20Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:20 crc 
kubenswrapper[4727]: I0109 10:47:20.705292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705353 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.705407 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.808461 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.859898 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.860028 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.860108 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.860302 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.860478 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:20 crc kubenswrapper[4727]: E0109 10:47:20.860660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:20 crc kubenswrapper[4727]: I0109 10:47:20.911728 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:20Z","lastTransitionTime":"2026-01-09T10:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.015241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117846 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.117872 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.220666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.324224 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.427562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.531241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.634913 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.737822 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.841116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.860253 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:21 crc kubenswrapper[4727]: E0109 10:47:21.860734 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:21 crc kubenswrapper[4727]: I0109 10:47:21.944608 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:21Z","lastTransitionTime":"2026-01-09T10:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047120 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.047212 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150679 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.150743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.253894 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.357436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.460260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562921 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.562932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.666600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.769882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.770853 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.860131 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.860275 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:22 crc kubenswrapper[4727]: E0109 10:47:22.860330 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.860363 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:47:22 crc kubenswrapper[4727]: E0109 10:47:22.861411 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:47:22 crc kubenswrapper[4727]: E0109 10:47:22.861844 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.873450 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:22 crc kubenswrapper[4727]: I0109 10:47:22.976953 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:22Z","lastTransitionTime":"2026-01-09T10:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080231 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.080260 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183597 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.183643 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.287560 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.390421 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.493870 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.597343 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.699975 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.802545 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.859812 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4"
Jan 09 10:47:23 crc kubenswrapper[4727]: E0109 10:47:23.860065 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:23 crc kubenswrapper[4727]: I0109 10:47:23.906193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:23Z","lastTransitionTime":"2026-01-09T10:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.009736 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.113922 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.217684 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.321419 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.424251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.528324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.631651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.735321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.838876 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.860270 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.860311 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.860396 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:47:24 crc kubenswrapper[4727]: E0109 10:47:24.860463 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:47:24 crc kubenswrapper[4727]: E0109 10:47:24.860670 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:47:24 crc kubenswrapper[4727]: E0109 10:47:24.860879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.880610 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.894065 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.912318 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.925986 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942124 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:24Z","lastTransitionTime":"2026-01-09T10:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.942118 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.962500 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc kubenswrapper[4727]: I0109 10:47:24.976848 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:24 crc 
kubenswrapper[4727]: I0109 10:47:24.991177 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124
fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 
10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:24Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.003492 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.016840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.031959 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.044972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.045081 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.047540 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c
4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.058056 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.069684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.080973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.093469 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.103663 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.114684 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:25Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.148314 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.251942 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.354229 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.458251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.561594 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.664645 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.767541 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.859560 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:25 crc kubenswrapper[4727]: E0109 10:47:25.859916 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.870228 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:25 crc kubenswrapper[4727]: I0109 10:47:25.973406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:25Z","lastTransitionTime":"2026-01-09T10:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.076610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.179579 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.282956 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.283088 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.385910 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.489270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591134 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.591219 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693964 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.693988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.694004 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796915 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.796948 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.859571 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.859571 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:26 crc kubenswrapper[4727]: E0109 10:47:26.859745 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:26 crc kubenswrapper[4727]: E0109 10:47:26.859788 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.859584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:26 crc kubenswrapper[4727]: E0109 10:47:26.859866 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:26 crc kubenswrapper[4727]: I0109 10:47:26.900323 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:26Z","lastTransitionTime":"2026-01-09T10:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.003646 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.106702 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.210999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.211014 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.314222 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.416739 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.519326 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.623250 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.725992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.726003 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.828928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.828990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.829006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.829029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.829041 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.859345 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:27 crc kubenswrapper[4727]: E0109 10:47:27.859587 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:27 crc kubenswrapper[4727]: I0109 10:47:27.933438 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:27Z","lastTransitionTime":"2026-01-09T10:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.036576 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.139886 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.242573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.345175 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.448987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.449014 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.493965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.508023 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.513187 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.528665 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.533606 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.548935 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.559784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.572549 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.576979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.577086 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.591766 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efb1b54a-bec3-40af-877b-b80c0cec5403\\\",\\\"systemUUID\\\":\\\"a4360e9d-d030-43eb-b040-259eb77bd39d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:28Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.591886 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594120 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.594262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.697383 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.800965 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.859492 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.859537 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.859728 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.859807 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.859919 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:28 crc kubenswrapper[4727]: E0109 10:47:28.860030 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:28 crc kubenswrapper[4727]: I0109 10:47:28.904422 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:28Z","lastTransitionTime":"2026-01-09T10:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.006760 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.109152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.212146 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.315214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.315626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.315840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.316035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.316233 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.421430 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.523748 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.626442 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.730152 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833125 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.833163 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.859409 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:29 crc kubenswrapper[4727]: E0109 10:47:29.859600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:29 crc kubenswrapper[4727]: I0109 10:47:29.936331 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:29Z","lastTransitionTime":"2026-01-09T10:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.039946 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.142866 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.245740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.348167 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.451374 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.554311 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657693 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.657782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.760491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.859879 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.859892 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:30 crc kubenswrapper[4727]: E0109 10:47:30.860110 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.859914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:30 crc kubenswrapper[4727]: E0109 10:47:30.860354 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:30 crc kubenswrapper[4727]: E0109 10:47:30.860604 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.862474 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965961 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:30 crc kubenswrapper[4727]: I0109 10:47:30.965974 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:30Z","lastTransitionTime":"2026-01-09T10:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.068939 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.172890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.173000 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.275924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.275984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.275996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.276016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.276030 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.379241 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.482672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.585965 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586026 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.586059 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.689941 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.792900 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.859941 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:31 crc kubenswrapper[4727]: E0109 10:47:31.860170 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896955 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:31 crc kubenswrapper[4727]: I0109 10:47:31.896967 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:31Z","lastTransitionTime":"2026-01-09T10:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000210 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.000283 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.102784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.148667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.148831 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.148920 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs podName:6a29665a-01da-4439-b13d-3950bf573044 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.148893841 +0000 UTC m=+161.598798622 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs") pod "network-metrics-daemon-vhsj4" (UID: "6a29665a-01da-4439-b13d-3950bf573044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.205800 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.308804 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.412130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.515550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.618665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.721878 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.825386 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.860254 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.860482 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.860550 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.860671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.860857 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:32 crc kubenswrapper[4727]: E0109 10:47:32.860991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:32 crc kubenswrapper[4727]: I0109 10:47:32.927908 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:32Z","lastTransitionTime":"2026-01-09T10:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.030838 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.134577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.134869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.134946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.135047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.135130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.238463 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.341661 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.444274 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.546971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.547699 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.651651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.754814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.755527 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.858637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859284 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.859362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:33 crc kubenswrapper[4727]: E0109 10:47:33.859614 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.860584 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:33 crc kubenswrapper[4727]: E0109 10:47:33.860854 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962294 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:33 crc kubenswrapper[4727]: I0109 10:47:33.962347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:33Z","lastTransitionTime":"2026-01-09T10:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.066776 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.170248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.273388 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.376788 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.479344 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.581936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.582064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684972 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.684994 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.788340 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.860021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.860197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:34 crc kubenswrapper[4727]: E0109 10:47:34.860381 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.860415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:34 crc kubenswrapper[4727]: E0109 10:47:34.860631 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:34 crc kubenswrapper[4727]: E0109 10:47:34.860769 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.881052 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47ddefcf-2547-42c6-b4a0-a4b0e3829c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c493e43726e0b77e5f571b323522bc11b8192e9b22748fa29f1b64d697c3d6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8649e05be10da20c0ef86e37e22a0973b8f89e2a4a1b267da9da872c166b651\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://351444743fbf6afd8d0b92287ff3c882fae0c42d61fbfe101a7f0efc2e249ba0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.891811 4727 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.895243 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"508776d9-843b-4648-a88f-d24f2cffd832\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d1f2c7e2be487e53b49b5f9b056af5b37f0051cd2929fab5f148ff00063d2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://409dabbcc7e9f910ebce53d884033a06cebde38fd091966c0fb99b1e111d1421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.912899 4727 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa9fbbda22b429720db7b11fcf31fe20d71226c4cada3daa82e11622a25a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.927102 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e06e472fd9b1ed168eeb279bbb2d9485e9c11d14d5c1c754a0a542f172f66f29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.942205 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-57zpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0230d78-c2b3-4a02-8243-6b39e8eecb90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819ee
db413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:00Z\\\",\\\"message\\\":\\\"2026-01-09T10:46:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03\\\\n2026-01-09T10:46:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b17b980e-abbc-4c55-988e-f967db74fd03 to /host/opt/cni/bin/\\\\n2026-01-09T10:46:15Z [verbose] multus-daemon started\\\\n2026-01-09T10:46:15Z [verbose] Readiness Indicator file check\\\\n2026-01-09T10:47:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h2wkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-57zpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.957963 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hg5sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32de8b71-676d-47ed-a5e4-48737247937e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3eb0c9f249c1170f2c75f7215b63c3d959a83b793aa194a45db5fcf69b12a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xgjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hg5sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.971357 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50be6d5b-675b-4837-ba20-6d6c75a363d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6be452648c61d47e336328cb8a78e6901899501436ccc18b7162bbf73c23e79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28513f76fce54e7508f658ac0acdbab96fa85820e361fcb3faea1d56131279b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9l5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h9pvg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.985022 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:34 crc kubenswrapper[4727]: I0109 10:47:34.994855 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:34Z","lastTransitionTime":"2026-01-09T10:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.000196 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlpv5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d335f7f5-7ede-4146-9ecc-f0718b547d43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a4974f7ad7aca7004784a6fbd174c60e6fa1cd1d9ac9f87d5882fd5bd9233c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgrfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlpv5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:34Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.021600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-09T10:47:18Z\\\",\\\"message\\\":\\\"}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0109 10:47:18.843109 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0109 10:47:18.843115 6892 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nI0109 10:47:18.843123 6892 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-hzdp7\\\\nF0109 10:47:18.843123 6892 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:18Z is after 2025-08-24T17:21:41Z]\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:47:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8e44e7cb8b091fe1a
b65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4rgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ngngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.035802 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a77ec7ba-891c-40b7-96f3-af128b6047ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7758f903fa144960847199add7388817a1f6a2e79ed6d8a56be6b5ca1cb5d695\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://178068ab8f3a3004fe8239cf76d09f9d8c4fe16a21b5f030c0af53f55a175ab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05ae08e6618c9d47364043a297cc090ae3e4c986a420dd980fbdae8a10c6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://72fc1f11c0bc10fa9f94cc087774c7d5ac3b3bd67fb7e6fb60b5e8567adc820f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.048756 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.064100 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d482831d4f684c7220bfcd1c83ccf1e11ddf72ffe718bdfab02f5dce0d4131f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d722633429e6c494abc3775549715c6b129897f4ec520c18a217554816bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.077158 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a29665a-01da-4439-b13d-3950bf573044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mkzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vhsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc 
kubenswrapper[4727]: I0109 10:47:35.097587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.097784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.098079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:45:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0109 10:46:12.315472 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0109 10:46:12.315644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0109 10:46:12.318769 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3420513502/tls.crt::/tmp/serving-cert-3420513502/tls.key\\\\\\\"\\\\nI0109 10:46:12.949937 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0109 10:46:12.954967 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0109 10:46:12.955008 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0109 10:46:12.955057 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0109 10:46:12.955064 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0109 10:46:12.960532 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0109 10:46:12.960557 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960562 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0109 10:46:12.960566 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0109 10:46:12.960570 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0109 10:46:12.960573 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0109 10:46:12.960576 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0109 10:46:12.960580 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0109 10:46:12.968090 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:45:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:45:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:45:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:45:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.113600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.128802 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea573637-1ca1-4211-8c88-9bc9fa78d6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa8a634d443879534a3005f3f5226a0b6d48d48c07b8de850f4a6ffb492b40ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499
de0e9839a0c50cb2befe9827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ktz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hzdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z" Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.149015 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3694c5b-19cf-464e-90b7-8e719d3a0d11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-09T10:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e463318a45806b31b5c7d03421d6f78f22a0d7a4e03fc53e85887acbdd65f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-09T10:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7be424f34318e423598e3e96bf75aef02cc97f384ef2bcc4d2ee75aebd880e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5dac52907f52f3935ce1d525bd1f236d1df3a94cafd89818bb28a0a9e5cbfad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173d7e09badf53be2fe228d00fbdf6dd948ce145fba66a6a46904b5e7ecbff04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a592
7d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a5927d3555b5b454ed42ac3e9a95c2e593c0b73815e60135c9e082cdd6079b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b60458b153bfd13bf70fce7adcccd4a702fe1eed64e0b1c08d45b7cff64f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616a96c5c01ad00be1e23cd98efce97cf470fe10d859d8c304ce263fe1047a7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:46:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rp9j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-09T10:46:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7sgfm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-09T10:47:35Z is after 2025-08-24T17:21:41Z"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.200377 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303963 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.303985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.304002 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.406992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.407556 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.510649 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.613899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.716368 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.819888 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.860338 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4"
Jan 09 10:47:35 crc kubenswrapper[4727]: E0109 10:47:35.860683 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:35 crc kubenswrapper[4727]: I0109 10:47:35.922828 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:35Z","lastTransitionTime":"2026-01-09T10:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.026431 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.129711 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.232798 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.335911 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.438484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.541604 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.644451 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.746970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.747069 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.850315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.859841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.859973 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:47:36 crc kubenswrapper[4727]: E0109 10:47:36.860040 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.859841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:47:36 crc kubenswrapper[4727]: E0109 10:47:36.860293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:47:36 crc kubenswrapper[4727]: E0109 10:47:36.860353 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:36 crc kubenswrapper[4727]: I0109 10:47:36.952950 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:36Z","lastTransitionTime":"2026-01-09T10:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.055921 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.162387 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.266673 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.369771 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.473486 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.576289 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.679774 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.783640 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.859794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:37 crc kubenswrapper[4727]: E0109 10:47:37.859985 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.887333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:37 crc kubenswrapper[4727]: I0109 10:47:37.991782 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:37Z","lastTransitionTime":"2026-01-09T10:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.095223 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.198937 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.302690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405958 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.405970 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.509142 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.611744 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.715474 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.819209 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.859794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:38 crc kubenswrapper[4727]: E0109 10:47:38.859937 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.859816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:38 crc kubenswrapper[4727]: E0109 10:47:38.860009 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.859800 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:38 crc kubenswrapper[4727]: E0109 10:47:38.860322 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.861239 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-09T10:47:38Z","lastTransitionTime":"2026-01-09T10:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.920556 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw"] Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.921211 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.923493 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.923578 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.925158 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.925201 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 09 10:47:38 crc kubenswrapper[4727]: I0109 10:47:38.980200 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qlpv5" podStartSLOduration=85.980175555 podStartE2EDuration="1m25.980175555s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:38.959058435 +0000 UTC m=+104.408963216" watchObservedRunningTime="2026-01-09 10:47:38.980175555 +0000 UTC m=+104.430080336" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.028692 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.028666526 podStartE2EDuration="55.028666526s" podCreationTimestamp="2026-01-09 10:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.012245908 +0000 UTC m=+104.462150709" watchObservedRunningTime="2026-01-09 10:47:39.028666526 +0000 UTC 
m=+104.478571307" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031470 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031660 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031759 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.031940 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.077486 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.077458108 podStartE2EDuration="1m27.077458108s" podCreationTimestamp="2026-01-09 10:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.076953702 +0000 UTC m=+104.526858493" watchObservedRunningTime="2026-01-09 10:47:39.077458108 +0000 UTC m=+104.527362879" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.122568 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podStartSLOduration=86.1225448 podStartE2EDuration="1m26.1225448s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.106143442 +0000 UTC m=+104.556048253" watchObservedRunningTime="2026-01-09 10:47:39.1225448 +0000 UTC m=+104.572449601" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133078 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133166 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133790 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.133935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.134572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.138823 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7sgfm" podStartSLOduration=86.138801734 podStartE2EDuration="1m26.138801734s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.124454462 +0000 UTC m=+104.574359263" watchObservedRunningTime="2026-01-09 10:47:39.138801734 +0000 UTC m=+104.588706515" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.142332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.156714 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f12ea37-6ec6-49d9-8870-27b7f320fa1a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sfwkw\" (UID: \"8f12ea37-6ec6-49d9-8870-27b7f320fa1a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.158958 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.158938662 podStartE2EDuration="1m27.158938662s" podCreationTimestamp="2026-01-09 10:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.15887496 +0000 UTC m=+104.608779761" watchObservedRunningTime="2026-01-09 10:47:39.158938662 +0000 UTC m=+104.608843443" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.171730 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.171707133 podStartE2EDuration="21.171707133s" podCreationTimestamp="2026-01-09 10:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.1709787 +0000 UTC m=+104.620883481" watchObservedRunningTime="2026-01-09 10:47:39.171707133 +0000 UTC m=+104.621611904" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.211058 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-57zpr" podStartSLOduration=86.21102918 podStartE2EDuration="1m26.21102918s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.210283736 +0000 UTC 
m=+104.660188527" watchObservedRunningTime="2026-01-09 10:47:39.21102918 +0000 UTC m=+104.660933961" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.222436 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hg5sh" podStartSLOduration=86.222408557 podStartE2EDuration="1m26.222408557s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.221184557 +0000 UTC m=+104.671089348" watchObservedRunningTime="2026-01-09 10:47:39.222408557 +0000 UTC m=+104.672313338" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.234452 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h9pvg" podStartSLOduration=85.234425783 podStartE2EDuration="1m25.234425783s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.233737561 +0000 UTC m=+104.683642352" watchObservedRunningTime="2026-01-09 10:47:39.234425783 +0000 UTC m=+104.684330564" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.237973 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.504134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" event={"ID":"8f12ea37-6ec6-49d9-8870-27b7f320fa1a","Type":"ContainerStarted","Data":"b6b3d6c929e00da3855deb688b967f6cf7cf7aa03befb8e5e7f646aea8e801ca"} Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.504203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" event={"ID":"8f12ea37-6ec6-49d9-8870-27b7f320fa1a","Type":"ContainerStarted","Data":"c4ab4ad56d9565ca1854ad66c2c2ff886669688a180d121c145b23dec5e1334a"} Jan 09 10:47:39 crc kubenswrapper[4727]: I0109 10:47:39.859902 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:39 crc kubenswrapper[4727]: E0109 10:47:39.860085 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:40 crc kubenswrapper[4727]: I0109 10:47:40.860115 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:40 crc kubenswrapper[4727]: I0109 10:47:40.860204 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:40 crc kubenswrapper[4727]: E0109 10:47:40.860395 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:40 crc kubenswrapper[4727]: I0109 10:47:40.860428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:40 crc kubenswrapper[4727]: E0109 10:47:40.860613 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:40 crc kubenswrapper[4727]: E0109 10:47:40.860682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:41 crc kubenswrapper[4727]: I0109 10:47:41.859698 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:41 crc kubenswrapper[4727]: E0109 10:47:41.860063 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:41 crc kubenswrapper[4727]: I0109 10:47:41.876491 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sfwkw" podStartSLOduration=88.876469415 podStartE2EDuration="1m28.876469415s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:39.518996709 +0000 UTC m=+104.968901490" watchObservedRunningTime="2026-01-09 10:47:41.876469415 +0000 UTC m=+107.326374196" Jan 09 10:47:41 crc kubenswrapper[4727]: I0109 10:47:41.876731 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 09 10:47:42 crc kubenswrapper[4727]: I0109 10:47:42.859654 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:42 crc kubenswrapper[4727]: I0109 10:47:42.859834 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:42 crc kubenswrapper[4727]: E0109 10:47:42.859856 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:42 crc kubenswrapper[4727]: I0109 10:47:42.859607 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:42 crc kubenswrapper[4727]: E0109 10:47:42.860976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:42 crc kubenswrapper[4727]: E0109 10:47:42.860992 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:43 crc kubenswrapper[4727]: I0109 10:47:43.859559 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:43 crc kubenswrapper[4727]: E0109 10:47:43.859711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.860207 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.860242 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:44 crc kubenswrapper[4727]: E0109 10:47:44.862167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.862188 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:44 crc kubenswrapper[4727]: E0109 10:47:44.862237 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:44 crc kubenswrapper[4727]: E0109 10:47:44.862307 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:44 crc kubenswrapper[4727]: I0109 10:47:44.889736 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.889710192 podStartE2EDuration="3.889710192s" podCreationTimestamp="2026-01-09 10:47:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:47:44.887979987 +0000 UTC m=+110.337884758" watchObservedRunningTime="2026-01-09 10:47:44.889710192 +0000 UTC m=+110.339614973" Jan 09 10:47:45 crc kubenswrapper[4727]: I0109 10:47:45.860097 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:45 crc kubenswrapper[4727]: E0109 10:47:45.861273 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:45 crc kubenswrapper[4727]: I0109 10:47:45.861144 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:45 crc kubenswrapper[4727]: E0109 10:47:45.861671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:46 crc kubenswrapper[4727]: I0109 10:47:46.859416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:46 crc kubenswrapper[4727]: I0109 10:47:46.859529 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:46 crc kubenswrapper[4727]: I0109 10:47:46.859575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:46 crc kubenswrapper[4727]: E0109 10:47:46.859627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:46 crc kubenswrapper[4727]: E0109 10:47:46.859737 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:46 crc kubenswrapper[4727]: E0109 10:47:46.859967 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532011 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532770 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/0.log" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532834 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" exitCode=1 Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532871 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" 
event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerDied","Data":"82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd"} Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.532908 4727 scope.go:117] "RemoveContainer" containerID="a0b9ea879a6b9646432f704ebfebe6875435a18dedb405d722df8f72d31ed9ec" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.533624 4727 scope.go:117] "RemoveContainer" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" Jan 09 10:47:47 crc kubenswrapper[4727]: E0109 10:47:47.533892 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-57zpr_openshift-multus(f0230d78-c2b3-4a02-8243-6b39e8eecb90)\"" pod="openshift-multus/multus-57zpr" podUID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" Jan 09 10:47:47 crc kubenswrapper[4727]: I0109 10:47:47.860022 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:47 crc kubenswrapper[4727]: E0109 10:47:47.860639 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.537348 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.859490 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:48 crc kubenswrapper[4727]: E0109 10:47:48.859650 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.859712 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:48 crc kubenswrapper[4727]: I0109 10:47:48.859754 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:48 crc kubenswrapper[4727]: E0109 10:47:48.860043 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:48 crc kubenswrapper[4727]: E0109 10:47:48.860158 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:49 crc kubenswrapper[4727]: I0109 10:47:49.859345 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:49 crc kubenswrapper[4727]: E0109 10:47:49.859556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:50 crc kubenswrapper[4727]: I0109 10:47:50.860193 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:50 crc kubenswrapper[4727]: I0109 10:47:50.860284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:50 crc kubenswrapper[4727]: E0109 10:47:50.861372 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:50 crc kubenswrapper[4727]: I0109 10:47:50.860355 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:50 crc kubenswrapper[4727]: E0109 10:47:50.861732 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:50 crc kubenswrapper[4727]: E0109 10:47:50.861893 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:51 crc kubenswrapper[4727]: I0109 10:47:51.859542 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:51 crc kubenswrapper[4727]: E0109 10:47:51.859752 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:52 crc kubenswrapper[4727]: I0109 10:47:52.859664 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:52 crc kubenswrapper[4727]: I0109 10:47:52.859783 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:52 crc kubenswrapper[4727]: I0109 10:47:52.859808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:52 crc kubenswrapper[4727]: E0109 10:47:52.859963 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:52 crc kubenswrapper[4727]: E0109 10:47:52.859801 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:52 crc kubenswrapper[4727]: E0109 10:47:52.860132 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:53 crc kubenswrapper[4727]: I0109 10:47:53.860252 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:53 crc kubenswrapper[4727]: E0109 10:47:53.860464 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.807959 4727 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 09 10:47:54 crc kubenswrapper[4727]: I0109 10:47:54.859845 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:54 crc kubenswrapper[4727]: I0109 10:47:54.859944 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:54 crc kubenswrapper[4727]: I0109 10:47:54.860094 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.861005 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.861222 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.861336 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:54 crc kubenswrapper[4727]: E0109 10:47:54.949922 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:47:55 crc kubenswrapper[4727]: I0109 10:47:55.859594 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:55 crc kubenswrapper[4727]: E0109 10:47:55.860481 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.860410 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.860558 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.860657 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.860695 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.860837 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.861031 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:56 crc kubenswrapper[4727]: I0109 10:47:56.862105 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:47:56 crc kubenswrapper[4727]: E0109 10:47:56.862290 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ngngm_openshift-ovn-kubernetes(33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" Jan 09 10:47:57 crc kubenswrapper[4727]: I0109 10:47:57.859468 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:57 crc kubenswrapper[4727]: E0109 10:47:57.859702 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:58 crc kubenswrapper[4727]: I0109 10:47:58.860132 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:47:58 crc kubenswrapper[4727]: I0109 10:47:58.860214 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:47:58 crc kubenswrapper[4727]: E0109 10:47:58.860289 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:47:58 crc kubenswrapper[4727]: I0109 10:47:58.860394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:47:58 crc kubenswrapper[4727]: E0109 10:47:58.860446 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:47:58 crc kubenswrapper[4727]: E0109 10:47:58.860646 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:47:59 crc kubenswrapper[4727]: I0109 10:47:59.859918 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:47:59 crc kubenswrapper[4727]: E0109 10:47:59.860272 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:47:59 crc kubenswrapper[4727]: I0109 10:47:59.860391 4727 scope.go:117] "RemoveContainer" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" Jan 09 10:47:59 crc kubenswrapper[4727]: E0109 10:47:59.950908 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.582793 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.582882 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08"} Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.859826 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.859975 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:00 crc kubenswrapper[4727]: I0109 10:48:00.860072 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:00 crc kubenswrapper[4727]: E0109 10:48:00.860157 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:00 crc kubenswrapper[4727]: E0109 10:48:00.860084 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:00 crc kubenswrapper[4727]: E0109 10:48:00.860271 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:01 crc kubenswrapper[4727]: I0109 10:48:01.859955 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:01 crc kubenswrapper[4727]: E0109 10:48:01.860134 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:02 crc kubenswrapper[4727]: I0109 10:48:02.859428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:02 crc kubenswrapper[4727]: I0109 10:48:02.859498 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:02 crc kubenswrapper[4727]: I0109 10:48:02.859706 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:02 crc kubenswrapper[4727]: E0109 10:48:02.859687 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:02 crc kubenswrapper[4727]: E0109 10:48:02.859838 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:02 crc kubenswrapper[4727]: E0109 10:48:02.859938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:03 crc kubenswrapper[4727]: I0109 10:48:03.859296 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:03 crc kubenswrapper[4727]: E0109 10:48:03.859802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:04 crc kubenswrapper[4727]: I0109 10:48:04.859853 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:04 crc kubenswrapper[4727]: I0109 10:48:04.861365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:04 crc kubenswrapper[4727]: I0109 10:48:04.861392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.861435 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.861526 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.861751 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:04 crc kubenswrapper[4727]: E0109 10:48:04.951764 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:05 crc kubenswrapper[4727]: I0109 10:48:05.859813 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:05 crc kubenswrapper[4727]: E0109 10:48:05.859982 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:06 crc kubenswrapper[4727]: I0109 10:48:06.859897 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:06 crc kubenswrapper[4727]: I0109 10:48:06.859968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:06 crc kubenswrapper[4727]: E0109 10:48:06.860136 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:06 crc kubenswrapper[4727]: I0109 10:48:06.860230 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:06 crc kubenswrapper[4727]: E0109 10:48:06.860258 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:06 crc kubenswrapper[4727]: E0109 10:48:06.860438 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:07 crc kubenswrapper[4727]: I0109 10:48:07.860074 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:07 crc kubenswrapper[4727]: E0109 10:48:07.860249 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:08 crc kubenswrapper[4727]: I0109 10:48:08.859776 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:08 crc kubenswrapper[4727]: I0109 10:48:08.859771 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:08 crc kubenswrapper[4727]: E0109 10:48:08.859968 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:08 crc kubenswrapper[4727]: I0109 10:48:08.859799 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:08 crc kubenswrapper[4727]: E0109 10:48:08.860072 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:08 crc kubenswrapper[4727]: E0109 10:48:08.860122 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:09 crc kubenswrapper[4727]: I0109 10:48:09.860141 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:09 crc kubenswrapper[4727]: E0109 10:48:09.860359 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:09 crc kubenswrapper[4727]: E0109 10:48:09.953749 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:10 crc kubenswrapper[4727]: I0109 10:48:10.860333 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:10 crc kubenswrapper[4727]: I0109 10:48:10.860357 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:10 crc kubenswrapper[4727]: E0109 10:48:10.860576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:10 crc kubenswrapper[4727]: I0109 10:48:10.860381 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:10 crc kubenswrapper[4727]: E0109 10:48:10.860711 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:10 crc kubenswrapper[4727]: E0109 10:48:10.860827 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:11 crc kubenswrapper[4727]: I0109 10:48:11.859934 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:11 crc kubenswrapper[4727]: E0109 10:48:11.860771 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:11 crc kubenswrapper[4727]: I0109 10:48:11.861450 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.627043 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.630861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerStarted","Data":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.631430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.662840 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podStartSLOduration=118.662811521 podStartE2EDuration="1m58.662811521s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:12.662372168 +0000 UTC m=+138.112276969" watchObservedRunningTime="2026-01-09 
10:48:12.662811521 +0000 UTC m=+138.112716322" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.860032 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.860079 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.860223 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.860258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.860415 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.860500 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.945119 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vhsj4"] Jan 09 10:48:12 crc kubenswrapper[4727]: I0109 10:48:12.945258 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:12 crc kubenswrapper[4727]: E0109 10:48:12.945372 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860298 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860298 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860387 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:14 crc kubenswrapper[4727]: I0109 10:48:14.860483 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.862064 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.862590 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.862809 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.863202 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:14 crc kubenswrapper[4727]: E0109 10:48:14.954683 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860279 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861280 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860353 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:16 crc kubenswrapper[4727]: I0109 10:48:16.860328 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861685 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044" Jan 09 10:48:16 crc kubenswrapper[4727]: E0109 10:48:16.861492 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860453 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860484 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.861167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vhsj4" podUID="6a29665a-01da-4439-b13d-3950bf573044"
Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860539 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:48:18 crc kubenswrapper[4727]: I0109 10:48:18.860586 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.861895 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.862028 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 09 10:48:18 crc kubenswrapper[4727]: E0109 10:48:18.861706 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860016 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860079 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860145 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.860229 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862375 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862432 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862432 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862877 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.862887 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.863238 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.937403 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:20 crc kubenswrapper[4727]: E0109 10:48:20.937595 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:50:22.93756981 +0000 UTC m=+268.387474591 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.937655 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:20 crc kubenswrapper[4727]: I0109 10:48:20.944466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.038825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.038895 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.038921 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.040031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.042099 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.042143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.176486 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.190362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.196003 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:48:21 crc kubenswrapper[4727]: W0109 10:48:21.453613 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173 WatchSource:0}: Error finding container 72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173: Status 404 returned error can't find the container with id 72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.661729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9038215f2a461aa23c14abe79af74b1e1ca6367c7d0bf500f4a12fff4b350c2d"}
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.661800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"72d42f9e1637af07f0eed0425fac7a6acb96012b987e52d927ceba95e71bf173"}
Jan 09 10:48:21 crc kubenswrapper[4727]: I0109 10:48:21.662083 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 09 10:48:21 crc kubenswrapper[4727]: W0109 10:48:21.689001 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34 WatchSource:0}: Error finding container 3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34: Status 404 returned error can't find the container with id 3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34
Jan 09 10:48:21 crc kubenswrapper[4727]: W0109 10:48:21.689350 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4 WatchSource:0}: Error finding container 57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4: Status 404 returned error can't find the container with id 57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4
Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.666999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a4850ec6daf1ded4182b2f9b0755746e960aff24eb8c1697b770c06b36c95b3a"}
Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.667427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3b5334c364dc81eda388e958de5e5723b8ffff9dcf5ccf8f448cc96251649a34"}
Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.668129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02f65a319a9db16c5d996291602bfa37d8e2eae9f31f0f651142ed45782e92df"}
Jan 09 10:48:22 crc kubenswrapper[4727]: I0109 10:48:22.668178 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"57a0e859dfa984dfcf8cc224e7aac8bcae35787a492b5fc5310ef8541a50a8e4"}
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.927992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.970666 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.971267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972130 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkdts"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972672 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972859 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.972921 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.973406 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5d9bz"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.974216 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5d9bz"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.974258 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.974701 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.975181 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.975575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.976195 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"]
Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.976495 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977088 4727 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977120 4727 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977142 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977157 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977198 4727 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977209 4727 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977228 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977210 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977242 4727 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: secrets "default-dockercfg-chnjx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977252 4727 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977264 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-chnjx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977285 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977296 4727 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977306 4727 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977321 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.977333 4727 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977346 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.977358 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978444 4727 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978475 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978540 4727 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978562 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978631 4727 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978655 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.978780 4727 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.978806 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979241 4727 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979272 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979359 4727 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979382 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979438 4727 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979454 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979564 4727 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979591 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979661 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979679 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979746 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979771 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979816 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979835 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979895 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979919 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.979977 4727 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.979994 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980041 4727 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980057 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980101 4727 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980119 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980167 4727 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980189 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980235 4727 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980252 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980296 4727 reflector.go:561]
object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980312 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980358 4727 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980374 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: W0109 10:48:29.980593 4727 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: secrets "cluster-image-registry-operator-dockercfg-m4qtx" is forbidden: 
User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 09 10:48:29 crc kubenswrapper[4727]: E0109 10:48:29.980619 4727 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-image-registry-operator-dockercfg-m4qtx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.980796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.981402 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.981736 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.981939 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.982362 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.982555 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.983175 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 
09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.986024 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.986698 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.987954 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.989614 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.992103 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"] Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.993213 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:29 crc kubenswrapper[4727]: I0109 10:48:29.996964 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:29.997332 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:29.997772 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:29.998031 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.019477 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.019797 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.019984 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020100 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020288 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020415 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020549 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020671 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.020993 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021154 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021388 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021495 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.021633 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.023775 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s9tfg"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.024326 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwvhd"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.025293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.025791 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.027780 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.028282 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-config\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrvk\" (UniqueName: \"kubernetes.io/projected/fab289a6-8124-413b-88f7-0ef3e4523b94-kube-api-access-nhrvk\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034619 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3f932b-fb41-4a2b-967b-a15de9606cbd-serving-cert\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034703 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034718 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 
10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034732 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab289a6-8124-413b-88f7-0ef3e4523b94-serving-cert\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.034748 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fab289a6-8124-413b-88f7-0ef3e4523b94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.038543 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.038759 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041064 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041277 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041428 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041548 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041682 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.041967 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.042660 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.045605 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.045942 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046887 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046972 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.047120 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.047235 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.046891 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.047365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.049927 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.055723 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.056410 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.064880 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065157 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065521 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065822 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.065858 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx72n"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.066002 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.056972 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.056716 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.075942 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.076673 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zcx2c"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.077302 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.077539 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.079948 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8lqcl"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.087281 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.090217 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.095772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.102952 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.128951 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129102 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129330 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.128949 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129717 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.129754 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.142498 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.142947 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150626 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150655 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ppp\" 
(UniqueName: \"kubernetes.io/projected/7e76cc6a-976f-4e61-8829-bbf3c4313293-kube-api-access-w2ppp\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150711 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150727 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150744 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-client\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 
09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab289a6-8124-413b-88f7-0ef3e4523b94-serving-cert\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150836 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150855 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150904 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150946 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-dir\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 
10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.150980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzwq\" (UniqueName: \"kubernetes.io/projected/7604b799-797e-4127-84cf-3f7e1c17dc87-kube-api-access-pqzwq\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151001 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3f932b-fb41-4a2b-967b-a15de9606cbd-serving-cert\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8459883-ed7a-4108-8198-ee2fbd63e891-metrics-tls\") pod 
\"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-policies\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnxm\" (UniqueName: \"kubernetes.io/projected/1d3f932b-fb41-4a2b-967b-a15de9606cbd-kube-api-access-8rnxm\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151100 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151133 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-swb26\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-kube-api-access-swb26\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151154 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bgm\" (UniqueName: \"kubernetes.io/projected/423f9db2-b3a1-406d-b906-bc4ba37fdb55-kube-api-access-f4bgm\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" 
(UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151226 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/33b90f5a-a103-48d8-9eb1-fd7a153250ac-kube-api-access-9qcj6\") pod \"downloads-7954f5f757-5d9bz\" (UID: \"33b90f5a-a103-48d8-9eb1-fd7a153250ac\") " pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fab289a6-8124-413b-88f7-0ef3e4523b94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151340 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllpk\" (UniqueName: \"kubernetes.io/projected/c999b3d9-4231-4163-821a-b759599c6510-kube-api-access-hllpk\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151356 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"console-f9d7485db-pjc7c\" (UID: 
\"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-config\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151413 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8l4f\" (UniqueName: \"kubernetes.io/projected/e8459883-ed7a-4108-8198-ee2fbd63e891-kube-api-access-z8l4f\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151433 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrvk\" (UniqueName: \"kubernetes.io/projected/fab289a6-8124-413b-88f7-0ef3e4523b94-kube-api-access-nhrvk\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztxj\" (UniqueName: \"kubernetes.io/projected/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-kube-api-access-mztxj\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-trusted-ca\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151489 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-encryption-config\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151545 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151564 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:48:30 crc 
kubenswrapper[4727]: I0109 10:48:30.151605 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.151648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.154708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-serving-cert\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.155425 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.155878 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156027 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156199 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156351 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156487 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.156795 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157037 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157177 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fab289a6-8124-413b-88f7-0ef3e4523b94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 
10:48:30.157362 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.157586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.158396 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.159239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-config\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162189 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fab289a6-8124-413b-88f7-0ef3e4523b94-serving-cert\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162747 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.162951 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 09 10:48:30 
crc kubenswrapper[4727]: I0109 10:48:30.163033 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163073 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163293 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163381 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163724 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163823 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163951 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.163985 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164070 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164075 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164133 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164263 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164399 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164753 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.164846 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.165051 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 
10:48:30.165082 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.165587 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nz6pf"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.165927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.166560 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.166695 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.166807 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.168093 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25xhd"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.168891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d3f932b-fb41-4a2b-967b-a15de9606cbd-serving-cert\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.168970 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169004 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169421 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169559 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.169568 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.170383 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.170441 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.170387 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.171097 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.171398 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.172999 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.177644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.179923 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.181029 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183021 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183306 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183346 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5d9bz"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183371 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s9tfg"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183395 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183408 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183419 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkdts"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183429 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ppcsh"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.183646 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.184570 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ppcsh"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.201496 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.204259 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.204268 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.207981 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.212360 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.217629 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.220034 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.222574 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwvhd"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.224832 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx72n"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.228692 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.230489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.231646 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.241050 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.244608 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.245884 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.247564 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.248958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.250556 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8lqcl"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.251143 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.252306 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.252914 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.253366 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.254552 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllpk\" (UniqueName: \"kubernetes.io/projected/c999b3d9-4231-4163-821a-b759599c6510-kube-api-access-hllpk\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255764 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tvd7t"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.255813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8l4f\" (UniqueName: \"kubernetes.io/projected/e8459883-ed7a-4108-8198-ee2fbd63e891-kube-api-access-z8l4f\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256111 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztxj\" (UniqueName: \"kubernetes.io/projected/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-kube-api-access-mztxj\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256150 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccf4\" (UniqueName: \"kubernetes.io/projected/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-kube-api-access-jccf4\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-trusted-ca\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-encryption-config\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-stats-auth\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256404 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-metrics-certs\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256435 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256532 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-serving-cert\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256564 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256597 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256683 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ppp\" (UniqueName: \"kubernetes.io/projected/7e76cc6a-976f-4e61-8829-bbf3c4313293-kube-api-access-w2ppp\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256750 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-client\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dq98\" (UniqueName: \"kubernetes.io/projected/5789711a-8f11-41c1-ac8d-eb5e60d147a1-kube-api-access-9dq98\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256841 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256857 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256876 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-dir\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzwq\" (UniqueName: \"kubernetes.io/projected/7604b799-797e-4127-84cf-3f7e1c17dc87-kube-api-access-pqzwq\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256985 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tvd7t"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.256998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8459883-ed7a-4108-8198-ee2fbd63e891-metrics-tls\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-policies\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-proxy-tls\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnxm\" (UniqueName: \"kubernetes.io/projected/1d3f932b-fb41-4a2b-967b-a15de9606cbd-kube-api-access-8rnxm\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257138 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swb26\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-kube-api-access-swb26\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257157 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257174 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bgm\" (UniqueName: \"kubernetes.io/projected/423f9db2-b3a1-406d-b906-bc4ba37fdb55-kube-api-access-f4bgm\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257212 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5789711a-8f11-41c1-ac8d-eb5e60d147a1-service-ca-bundle\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/33b90f5a-a103-48d8-9eb1-fd7a153250ac-kube-api-access-9qcj6\") pod \"downloads-7954f5f757-5d9bz\" (UID: \"33b90f5a-a103-48d8-9eb1-fd7a153250ac\") " pod="openshift-console/downloads-7954f5f757-5d9bz"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.257324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-default-certificate\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.258035 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d3f932b-fb41-4a2b-967b-a15de9606cbd-trusted-ca\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.258240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.258583 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ppcsh"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.259317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.259978 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260105 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260137 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260369 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.260606 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261075 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-policies\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261599 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7604b799-797e-4127-84cf-3f7e1c17dc87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261680 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7604b799-797e-4127-84cf-3f7e1c17dc87-audit-dir\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.261930 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-serving-cert\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.262049 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.262205 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.263003 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8459883-ed7a-4108-8198-ee2fbd63e891-metrics-tls\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.263385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.263575 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-encryption-config\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.264050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.264368 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7604b799-797e-4127-84cf-3f7e1c17dc87-etcd-client\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.264936 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nz6pf"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.266389 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.267857 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.269316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.270834 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tvd7t"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.272698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25xhd"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.273718 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xqcqv"]
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.273747 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.275304 4727 kubelet.go:2421]
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-99dfz"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.275537 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.276649 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.276663 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xqcqv"] Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.293832 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.315528 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.333946 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-stats-auth\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-metrics-certs\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " 
pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359134 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359210 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dq98\" (UniqueName: \"kubernetes.io/projected/5789711a-8f11-41c1-ac8d-eb5e60d147a1-kube-api-access-9dq98\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359344 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-proxy-tls\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5789711a-8f11-41c1-ac8d-eb5e60d147a1-service-ca-bundle\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-default-certificate\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.359562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccf4\" (UniqueName: \"kubernetes.io/projected/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-kube-api-access-jccf4\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.361608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.362237 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.363803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-proxy-tls\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.373343 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.384076 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-default-certificate\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.394717 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.401056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5789711a-8f11-41c1-ac8d-eb5e60d147a1-service-ca-bundle\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.414756 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.422853 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-metrics-certs\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.433803 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.454139 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.491641 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nhrvk\" (UniqueName: \"kubernetes.io/projected/fab289a6-8124-413b-88f7-0ef3e4523b94-kube-api-access-nhrvk\") pod \"openshift-config-operator-7777fb866f-n4g9c\" (UID: \"fab289a6-8124-413b-88f7-0ef3e4523b94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.493963 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.513238 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.533163 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.554219 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.562346 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5789711a-8f11-41c1-ac8d-eb5e60d147a1-stats-auth\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.574026 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.614957 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.634014 4727 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.654438 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.663492 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.673335 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.694312 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.714249 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.734449 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.753795 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.774723 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.794299 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.814552 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 09 10:48:30 crc kubenswrapper[4727]: 
I0109 10:48:30.833607 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.853977 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.873429 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.883876 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"] Jan 09 10:48:30 crc kubenswrapper[4727]: W0109 10:48:30.892269 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab289a6_8124_413b_88f7_0ef3e4523b94.slice/crio-9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279 WatchSource:0}: Error finding container 9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279: Status 404 returned error can't find the container with id 9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279 Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.894101 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.919771 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.933240 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.953924 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.973775 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 09 10:48:30 crc kubenswrapper[4727]: I0109 10:48:30.994770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.013791 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.033421 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.053693 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.073930 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.094224 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.113662 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.134160 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.153525 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156044 4727 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156120 4727 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156142 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.656119359 +0000 UTC m=+157.106024140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.156212 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.656187051 +0000 UTC m=+157.106091832 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.172413 4727 request.go:700] Waited for 1.005317273s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.173835 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.194318 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.213806 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.233315 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.252460 4727 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.252503 4727 projected.go:194] Error preparing data for projected volume kube-api-access-vpmsk for pod openshift-controller-manager/controller-manager-879f6c89f-75slj: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.252585 4727 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.752562215 +0000 UTC m=+157.202466996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vpmsk" (UniqueName: "kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.253647 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259275 4727 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259314 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759304691 +0000 UTC m=+157.209209472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259277 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259445 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759437465 +0000 UTC m=+157.209342246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259610 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259710 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759700172 +0000 UTC m=+157.209604953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259742 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.259887 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.759878937 +0000 UTC m=+157.209783718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260381 4727 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260421 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert podName:423f9db2-b3a1-406d-b906-bc4ba37fdb55 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760411873 +0000 UTC m=+157.210316654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-rbqsq" (UID: "423f9db2-b3a1-406d-b906-bc4ba37fdb55") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260433 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260439 4727 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260465 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760455254 +0000 UTC m=+157.210360035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260479 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760470635 +0000 UTC m=+157.210375416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260481 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260501 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760495006 +0000 UTC m=+157.210399787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260862 4727 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.260986 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config podName:423f9db2-b3a1-406d-b906-bc4ba37fdb55 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.760974589 +0000 UTC m=+157.210879360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config") pod "openshift-apiserver-operator-796bbdcf4f-rbqsq" (UID: "423f9db2-b3a1-406d-b906-bc4ba37fdb55") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.261023 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.261146 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.761136564 +0000 UTC m=+157.211041345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262665 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262698 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.762690539 +0000 UTC m=+157.212595320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262779 4727 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: E0109 10:48:31.262985 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:31.762951767 +0000 UTC m=+157.212856548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync secret cache: timed out waiting for the condition Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.280079 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.293755 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.313045 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.333060 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.353482 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.374399 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.394632 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.414276 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.435367 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.454397 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.475134 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.494296 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.514135 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.534498 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.554185 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.574447 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.597324 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.614877 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.634469 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.653988 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.674877 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.677814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.678123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.694311 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.701225 4727 generic.go:334] "Generic (PLEG): container finished" podID="fab289a6-8124-413b-88f7-0ef3e4523b94" containerID="54c508419749042efdc1048c1d26247b608cb174bd851eb22ff5ca550efb8308" exitCode=0 Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.701351 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" event={"ID":"fab289a6-8124-413b-88f7-0ef3e4523b94","Type":"ContainerDied","Data":"54c508419749042efdc1048c1d26247b608cb174bd851eb22ff5ca550efb8308"} Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.701575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" event={"ID":"fab289a6-8124-413b-88f7-0ef3e4523b94","Type":"ContainerStarted","Data":"9dbdcebd436d03b5db60d5cb6a366ff80446418ea05608800630be2d0940a279"} Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.713433 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.734160 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.759977 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 09 
10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.773740 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779213 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779631 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779691 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.779941 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.794219 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 
10:48:31.814102 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.833766 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.854461 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.874679 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.893073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 09 10:48:31 crc kubenswrapper[4727]: I0109 10:48:31.971358 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8l4f\" (UniqueName: \"kubernetes.io/projected/e8459883-ed7a-4108-8198-ee2fbd63e891-kube-api-access-z8l4f\") pod \"dns-operator-744455d44c-xwvhd\" (UID: \"e8459883-ed7a-4108-8198-ee2fbd63e891\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.014364 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.034429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.053451 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.060862 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.073795 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.149725 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnxm\" (UniqueName: \"kubernetes.io/projected/1d3f932b-fb41-4a2b-967b-a15de9606cbd-kube-api-access-8rnxm\") pod \"console-operator-58897d9998-s9tfg\" (UID: \"1d3f932b-fb41-4a2b-967b-a15de9606cbd\") " pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.168312 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzwq\" (UniqueName: \"kubernetes.io/projected/7604b799-797e-4127-84cf-3f7e1c17dc87-kube-api-access-pqzwq\") pod \"apiserver-7bbb656c7d-gqtf6\" (UID: \"7604b799-797e-4127-84cf-3f7e1c17dc87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.191841 4727 request.go:700] Waited for 1.931446613s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.210239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.228907 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-swb26\" (UniqueName: \"kubernetes.io/projected/fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b-kube-api-access-swb26\") pod \"cluster-image-registry-operator-dc59b4c8b-dwxl4\" (UID: \"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.248963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"route-controller-manager-6576b87f9c-zrrcw\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.254303 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.274576 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.294196 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.313360 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwvhd"] Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.314175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.333424 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.345001 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.353911 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.353954 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.389913 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.401118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dq98\" (UniqueName: \"kubernetes.io/projected/5789711a-8f11-41c1-ac8d-eb5e60d147a1-kube-api-access-9dq98\") pod \"router-default-5444994796-zcx2c\" (UID: \"5789711a-8f11-41c1-ac8d-eb5e60d147a1\") " pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.413187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccf4\" (UniqueName: \"kubernetes.io/projected/16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d-kube-api-access-jccf4\") pod \"machine-config-controller-84d6567774-xj755\" (UID: \"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.433936 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.454092 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.477497 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.481121 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.487997 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/423f9db2-b3a1-406d-b906-bc4ba37fdb55-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e375e91d-f60e-4b86-87ee-a043c2b81128-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.492285 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.493827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496081 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-image-import-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-serving-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.496968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit-dir\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0621386-4e3b-422a-93db-adcd616daa7a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc 
kubenswrapper[4727]: I0109 10:48:32.497047 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-node-pullsecrets\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-encryption-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57n4w\" (UniqueName: \"kubernetes.io/projected/096c2622-3648-4579-8139-9d3a8d4a9006-kube-api-access-57n4w\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"image-registry-697d97f7c8-wfhcs\" 
(UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/096c2622-3648-4579-8139-9d3a8d4a9006-proxy-tls\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc 
kubenswrapper[4727]: I0109 10:48:32.497439 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46h56\" (UniqueName: \"kubernetes.io/projected/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-kube-api-access-46h56\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497466 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497529 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dmmg\" (UniqueName: \"kubernetes.io/projected/15a46c73-a8f2-427f-a701-01ccad52c6a1-kube-api-access-6dmmg\") pod \"migrator-59844c95c7-wxzs5\" (UID: \"15a46c73-a8f2-427f-a701-01ccad52c6a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.497558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4c8l\" (UniqueName: 
\"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498426 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: 
I0109 10:48:32.498827 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.498915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-client\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.499008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.499327 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.499372 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvzp\" (UniqueName: \"kubernetes.io/projected/198987e6-b5aa-4331-9e5e-4a51a02ab712-kube-api-access-rbvzp\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl857\" (UniqueName: \"kubernetes.io/projected/e375e91d-f60e-4b86-87ee-a043c2b81128-kube-api-access-wl857\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509400 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.509677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.510923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.511197 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.011168245 +0000 UTC m=+158.461073016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.511579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-serving-cert\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512118 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh4rg\" (UniqueName: \"kubernetes.io/projected/e0621386-4e3b-422a-93db-adcd616daa7a-kube-api-access-gh4rg\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e375e91d-f60e-4b86-87ee-a043c2b81128-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.512323 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-images\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.513067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.515765 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.548713 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.553078 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.554308 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.574655 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.594311 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.610931 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.614309 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.614329 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.614597 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.114545563 +0000 UTC m=+158.564450354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.614751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztxj\" (UniqueName: \"kubernetes.io/projected/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-kube-api-access-mztxj\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615314 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-plugins-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615448 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46h56\" (UniqueName: \"kubernetes.io/projected/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-kube-api-access-46h56\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.615810 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.616161 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.616307 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6dmmg\" (UniqueName: \"kubernetes.io/projected/15a46c73-a8f2-427f-a701-01ccad52c6a1-kube-api-access-6dmmg\") pod \"migrator-59844c95c7-wxzs5\" (UID: \"15a46c73-a8f2-427f-a701-01ccad52c6a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.616812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617347 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lqmc\" (UniqueName: \"kubernetes.io/projected/aa62f546-f6a1-46e8-9023-482a9e2e04b6-kube-api-access-8lqmc\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617437 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6l2v\" (UniqueName: \"kubernetes.io/projected/ea45a4de-3e71-4605-b02d-258b9dbb544c-kube-api-access-d6l2v\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617479 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-node-bootstrap-token\") pod 
\"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617609 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-config\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617715 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-key\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617733 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-srv-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617790 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617838 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-srv-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617874 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lmh\" (UniqueName: \"kubernetes.io/projected/27d5037e-e25b-4865-a1fe-7d165be1bf23-kube-api-access-p4lmh\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617940 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.617961 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2640d0ff-e8c2-4795-bf96-9b862e10de22-config\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.618017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvzp\" (UniqueName: \"kubernetes.io/projected/198987e6-b5aa-4331-9e5e-4a51a02ab712-kube-api-access-rbvzp\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.618047 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzd77\" (UniqueName: \"kubernetes.io/projected/879d1222-addb-406a-b8fd-3ce4068c1d08-kube-api-access-fzd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.618640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 
10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619129 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbc98\" (UniqueName: \"kubernetes.io/projected/402cb251-6fda-417f-a9bf-30b59833a3cd-kube-api-access-rbc98\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619877 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-certs\") pod \"machine-config-server-99dfz\" (UID: 
\"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619903 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-cabundle\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-serving-cert\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619964 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27d5037e-e25b-4865-a1fe-7d165be1bf23-config-volume\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.619997 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh4rg\" (UniqueName: \"kubernetes.io/projected/e0621386-4e3b-422a-93db-adcd616daa7a-kube-api-access-gh4rg\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620068 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-images\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620094 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa62f546-f6a1-46e8-9023-482a9e2e04b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-registration-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620161 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27d5037e-e25b-4865-a1fe-7d165be1bf23-metrics-tls\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgq6\" (UniqueName: \"kubernetes.io/projected/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-kube-api-access-8xgq6\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-client\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620265 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0621386-4e3b-422a-93db-adcd616daa7a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620315 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh47w\" (UniqueName: \"kubernetes.io/projected/50dba57c-02ba-4204-a8d0-6f95ffed6db7-kube-api-access-sh47w\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620333 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-webhook-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" 
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-mountpoint-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57n4w\" (UniqueName: \"kubernetes.io/projected/096c2622-3648-4579-8139-9d3a8d4a9006-kube-api-access-57n4w\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620395 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/096c2622-3648-4579-8139-9d3a8d4a9006-proxy-tls\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-csi-data-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: 
\"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hw9\" (UniqueName: \"kubernetes.io/projected/2640d0ff-e8c2-4795-bf96-9b862e10de22-kube-api-access-k9hw9\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620498 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.620552 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/879d1222-addb-406a-b8fd-3ce4068c1d08-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621282 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621379 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4c8l\" (UniqueName: 
\"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621543 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622367 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.622401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623335 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2640d0ff-e8c2-4795-bf96-9b862e10de22-serving-cert\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623373 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-client\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-szblf\" (UniqueName: \"kubernetes.io/projected/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-kube-api-access-szblf\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-tmpfs\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623542 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsq5\" (UniqueName: \"kubernetes.io/projected/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-kube-api-access-5qsq5\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623974 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.624418 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.621677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c2622-3648-4579-8139-9d3a8d4a9006-images\") pod 
\"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.625566 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.125541852 +0000 UTC m=+158.575446794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625759 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625773 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c999b3d9-4231-4163-821a-b759599c6510-machine-approver-tls\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.625975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.623632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl857\" (UniqueName: \"kubernetes.io/projected/e375e91d-f60e-4b86-87ee-a043c2b81128-kube-api-access-wl857\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: 
\"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqg9t\" (UniqueName: \"kubernetes.io/projected/8674271c-47a7-4722-9ceb-76e787b31485-kube-api-access-xqg9t\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e375e91d-f60e-4b86-87ee-a043c2b81128-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtb92\" (UniqueName: \"kubernetes.io/projected/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-kube-api-access-gtb92\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626458 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626499 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-serving-cert\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626755 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qzn\" (UniqueName: \"kubernetes.io/projected/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-kube-api-access-d4qzn\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: 
\"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e375e91d-f60e-4b86-87ee-a043c2b81128-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-service-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-config\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626884 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-socket-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626910 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626932 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-image-import-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626968 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-serving-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.626995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit-dir\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.627628 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.628010 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0621386-4e3b-422a-93db-adcd616daa7a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.637334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/096c2622-3648-4579-8139-9d3a8d4a9006-proxy-tls\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.638186 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e375e91d-f60e-4b86-87ee-a043c2b81128-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.639525 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.640079 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-client\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.640091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-serving-cert\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.640267 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.641869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.643940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.644656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-audit-dir\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645028 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-node-pullsecrets\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-encryption-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/198987e6-b5aa-4331-9e5e-4a51a02ab712-node-pullsecrets\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645622 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645747 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxb5\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-kube-api-access-tvxb5\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8674271c-47a7-4722-9ceb-76e787b31485-cert\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.645879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-config\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.651411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.651614 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s9tfg"]
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.652820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.653297 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.656631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-etcd-serving-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.657400 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.658000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e375e91d-f60e-4b86-87ee-a043c2b81128-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.658069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.658408 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.659191 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/198987e6-b5aa-4331-9e5e-4a51a02ab712-image-import-ca\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.659953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ppp\" (UniqueName: \"kubernetes.io/projected/7e76cc6a-976f-4e61-8829-bbf3c4313293-kube-api-access-w2ppp\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.660338 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.660425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.662668 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/198987e6-b5aa-4331-9e5e-4a51a02ab712-encryption-config\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:32 crc kubenswrapper[4727]: W0109 10:48:32.667133 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3f932b_fb41_4a2b_967b_a15de9606cbd.slice/crio-1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c WatchSource:0}: Error finding container 1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c: Status 404 returned error can't find the container with id 1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.674421 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.679439 4727 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.679604 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles podName:b80bab42-ad32-4ec1-83c3-d939b007a97b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.679578784 +0000 UTC m=+159.129483565 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles") pod "controller-manager-879f6c89f-75slj" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.685538 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.694606 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.696138 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.714020 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.717313 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6"]
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.718690 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.721210 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" event={"ID":"fab289a6-8124-413b-88f7-0ef3e4523b94","Type":"ContainerStarted","Data":"00c9abd5a627f7cbe2dec5c6f0e47f428f7df9864428e7a91b37a9fe46d111dc"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.721640 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.723314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" event={"ID":"e8459883-ed7a-4108-8198-ee2fbd63e891","Type":"ContainerStarted","Data":"81d7f870b23c46598d90f879b8747361b96a7182a6c7b06a1380f8c2775bee8a"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.723360 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" event={"ID":"e8459883-ed7a-4108-8198-ee2fbd63e891","Type":"ContainerStarted","Data":"c5bf64f73c69309b7c62b5a733aa637377bf02397c8db3c9a8c81adc87eec1be"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.724937 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerStarted","Data":"b91fc4ab06ef577d9c4e0fad8710798e885460e768b3d9d37cb5205f9fe286fa"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.726069 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" event={"ID":"1d3f932b-fb41-4a2b-967b-a15de9606cbd","Type":"ContainerStarted","Data":"1d65011713b801bb0a4577783eee5f4d9353a01b5db52999118ba47182e2867c"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.728547 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zcx2c" event={"ID":"5789711a-8f11-41c1-ac8d-eb5e60d147a1","Type":"ContainerStarted","Data":"c077485a0a1a3d46e50df5741e29227c5c48f6004b354b76516d54bc4c53ebd2"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.728581 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zcx2c" event={"ID":"5789711a-8f11-41c1-ac8d-eb5e60d147a1","Type":"ContainerStarted","Data":"44a05262b0d4443bc5c637943cdd990fe1d88cb57872329b1d23084ebccf53f1"}
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.735287 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 09 10:48:32 crc kubenswrapper[4727]: W0109 10:48:32.739320 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7604b799_797e_4127_84cf_3f7e1c17dc87.slice/crio-f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55 WatchSource:0}: Error finding container f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55: Status 404 returned error can't find the container with id f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.746866 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.746957 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.747330 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.247283254 +0000 UTC m=+158.697188035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747514 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-config\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-socket-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747650 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747681 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxb5\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-kube-api-access-tvxb5\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8674271c-47a7-4722-9ceb-76e787b31485-cert\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747756 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-config\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-plugins-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747859 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747933 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.747992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lqmc\" (UniqueName: \"kubernetes.io/projected/aa62f546-f6a1-46e8-9023-482a9e2e04b6-kube-api-access-8lqmc\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6l2v\" (UniqueName: \"kubernetes.io/projected/ea45a4de-3e71-4605-b02d-258b9dbb544c-kube-api-access-d6l2v\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748084 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-node-bootstrap-token\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748168 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-config\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748213 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-key\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748236 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-srv-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-srv-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2640d0ff-e8c2-4795-bf96-9b862e10de22-config\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4lmh\" (UniqueName: \"kubernetes.io/projected/27d5037e-e25b-4865-a1fe-7d165be1bf23-kube-api-access-p4lmh\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748557 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzd77\" (UniqueName: \"kubernetes.io/projected/879d1222-addb-406a-b8fd-3ce4068c1d08-kube-api-access-fzd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbc98\" (UniqueName: \"kubernetes.io/projected/402cb251-6fda-417f-a9bf-30b59833a3cd-kube-api-access-rbc98\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748630 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-certs\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748710 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-cabundle\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748741 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27d5037e-e25b-4865-a1fe-7d165be1bf23-config-volume\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa62f546-f6a1-46e8-9023-482a9e2e04b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-registration-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27d5037e-e25b-4865-a1fe-7d165be1bf23-metrics-tls\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748861 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748882 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgq6\" (UniqueName: \"kubernetes.io/projected/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-kube-api-access-8xgq6\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748902 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-client\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-config\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748974 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh47w\" (UniqueName: \"kubernetes.io/projected/50dba57c-02ba-4204-a8d0-6f95ffed6db7-kube-api-access-sh47w\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.748999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-webhook-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-plugins-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749149 4727
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-mountpoint-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-csi-data-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749277 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9hw9\" (UniqueName: \"kubernetes.io/projected/2640d0ff-e8c2-4795-bf96-9b862e10de22-kube-api-access-k9hw9\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749367 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/879d1222-addb-406a-b8fd-3ce4068c1d08-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") 
" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-socket-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749608 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2640d0ff-e8c2-4795-bf96-9b862e10de22-serving-cert\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749707 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szblf\" (UniqueName: \"kubernetes.io/projected/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-kube-api-access-szblf\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-tmpfs\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749902 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsq5\" (UniqueName: \"kubernetes.io/projected/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-kube-api-access-5qsq5\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.749995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqg9t\" (UniqueName: \"kubernetes.io/projected/8674271c-47a7-4722-9ceb-76e787b31485-kube-api-access-xqg9t\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750022 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750049 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtb92\" (UniqueName: \"kubernetes.io/projected/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-kube-api-access-gtb92\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-serving-cert\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qzn\" (UniqueName: \"kubernetes.io/projected/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-kube-api-access-d4qzn\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-service-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750729 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-config\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.750953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-service-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.751030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-mountpoint-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.751107 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-csi-data-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.752882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-tmpfs\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.753031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-cabundle\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.753618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27d5037e-e25b-4865-a1fe-7d165be1bf23-config-volume\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.754223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.754800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.754797 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.254780362 +0000 UTC m=+158.704685143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755418 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-registration-dir\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8674271c-47a7-4722-9ceb-76e787b31485-cert\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.755756 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-config\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.757085 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-webhook-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.757118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa62f546-f6a1-46e8-9023-482a9e2e04b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.758470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-certs\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.759068 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.759693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2640d0ff-e8c2-4795-bf96-9b862e10de22-config\") pod 
\"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760002 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/879d1222-addb-406a-b8fd-3ce4068c1d08-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760479 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-ca\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760660 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.760654 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-metrics-tls\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.761285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.758450 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.761437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-trusted-ca\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.761751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.762757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.764849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/27d5037e-e25b-4865-a1fe-7d165be1bf23-metrics-tls\") pod \"dns-default-ppcsh\" (UID: 
\"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.765002 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ea45a4de-3e71-4605-b02d-258b9dbb544c-node-bootstrap-token\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.766778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-srv-cert\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.769951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50dba57c-02ba-4204-a8d0-6f95ffed6db7-srv-cert\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.769953 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.769966 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e76cc6a-976f-4e61-8829-bbf3c4313293-serving-cert\") pod 
\"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.770019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-etcd-client\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.770072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-signing-key\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.775161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.775762 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.777063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402cb251-6fda-417f-a9bf-30b59833a3cd-serving-cert\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" Jan 09 
10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.778122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2640d0ff-e8c2-4795-bf96-9b862e10de22-serving-cert\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.778389 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.780056 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.780138 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.780114798 +0000 UTC m=+159.230019579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781155 4727 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781220 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781311 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config podName:423f9db2-b3a1-406d-b906-bc4ba37fdb55 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.78118321 +0000 UTC m=+159.231087991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config") pod "openshift-apiserver-operator-796bbdcf4f-rbqsq" (UID: "423f9db2-b3a1-406d-b906-bc4ba37fdb55") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781485 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.781331774 +0000 UTC m=+159.231236735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781811 4727 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781924 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config podName:7e76cc6a-976f-4e61-8829-bbf3c4313293 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.78190105 +0000 UTC m=+159.231805831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config") pod "authentication-operator-69f744f599-mkdts" (UID: "7e76cc6a-976f-4e61-8829-bbf3c4313293") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781809 4727 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.782252 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config podName:c999b3d9-4231-4163-821a-b759599c6510 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.782234911 +0000 UTC m=+159.232139702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config") pod "machine-approver-56656f9798-9zbmm" (UID: "c999b3d9-4231-4163-821a-b759599c6510") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.781577 4727 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.782542 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images podName:ff5b64d7-46ec-4f56-a044-4b57c96ebc03 nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.782530739 +0000 UTC m=+159.232435710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images") pod "machine-api-operator-5694c8668f-9b2sc" (UID: "ff5b64d7-46ec-4f56-a044-4b57c96ebc03") : failed to sync configmap cache: timed out waiting for the condition
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.781959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.803176 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.816698 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.841109 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.852851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.853690 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.353652818 +0000 UTC m=+158.803557599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.854165 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.854742 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.354733989 +0000 UTC m=+158.804638760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.862131 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.878618 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.899282 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.915959 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.934180 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.948260 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bgm\" (UniqueName: \"kubernetes.io/projected/423f9db2-b3a1-406d-b906-bc4ba37fdb55-kube-api-access-f4bgm\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.954646 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.958243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:32 crc kubenswrapper[4727]: E0109 10:48:32.961159 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.461112503 +0000 UTC m=+158.911017284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.965651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllpk\" (UniqueName: \"kubernetes.io/projected/c999b3d9-4231-4163-821a-b759599c6510-kube-api-access-hllpk\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.975914 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.984748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcj6\" (UniqueName: \"kubernetes.io/projected/33b90f5a-a103-48d8-9eb1-fd7a153250ac-kube-api-access-9qcj6\") pod \"downloads-7954f5f757-5d9bz\" (UID: \"33b90f5a-a103-48d8-9eb1-fd7a153250ac\") " pod="openshift-console/downloads-7954f5f757-5d9bz"
Jan 09 10:48:32 crc kubenswrapper[4727]: I0109 10:48:32.994521 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:32.998307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"console-f9d7485db-pjc7c\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.055831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46h56\" (UniqueName: \"kubernetes.io/projected/d3ee2782-e2b4-41bf-8633-000ccd1fb4d2-kube-api-access-46h56\") pod \"multus-admission-controller-857f4d67dd-fx72n\" (UID: \"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.060276 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.060817 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.560791733 +0000 UTC m=+159.010696514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.059249 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xj755"]
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.096042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dmmg\" (UniqueName: \"kubernetes.io/projected/15a46c73-a8f2-427f-a701-01ccad52c6a1-kube-api-access-6dmmg\") pod \"migrator-59844c95c7-wxzs5\" (UID: \"15a46c73-a8f2-427f-a701-01ccad52c6a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.105766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"oauth-openshift-558db77b4-ldkw8\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.110646 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.114318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvzp\" (UniqueName: \"kubernetes.io/projected/198987e6-b5aa-4331-9e5e-4a51a02ab712-kube-api-access-rbvzp\") pod \"apiserver-76f77b778f-8lqcl\" (UID: \"198987e6-b5aa-4331-9e5e-4a51a02ab712\") " pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.130402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh4rg\" (UniqueName: \"kubernetes.io/projected/e0621386-4e3b-422a-93db-adcd616daa7a-kube-api-access-gh4rg\") pod \"cluster-samples-operator-665b6dd947-pk2gc\" (UID: \"e0621386-4e3b-422a-93db-adcd616daa7a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.130800 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5d9bz"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.131774 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.153912 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4c8l\" (UniqueName: \"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"marketplace-operator-79b997595-vlqcc\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.160797 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4"]
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.162183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.162402 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.662364947 +0000 UTC m=+159.112269728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.162809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.163355 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.663348896 +0000 UTC m=+159.113253677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.173894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57n4w\" (UniqueName: \"kubernetes.io/projected/096c2622-3648-4579-8139-9d3a8d4a9006-kube-api-access-57n4w\") pod \"machine-config-operator-74547568cd-tszhc\" (UID: \"096c2622-3648-4579-8139-9d3a8d4a9006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.189642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl857\" (UniqueName: \"kubernetes.io/projected/e375e91d-f60e-4b86-87ee-a043c2b81128-kube-api-access-wl857\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrfkk\" (UID: \"e375e91d-f60e-4b86-87ee-a043c2b81128\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.215016 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.230652 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.234277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.263734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.264443 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.764414566 +0000 UTC m=+159.214319347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.269081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6l2v\" (UniqueName: \"kubernetes.io/projected/ea45a4de-3e71-4605-b02d-258b9dbb544c-kube-api-access-d6l2v\") pod \"machine-config-server-99dfz\" (UID: \"ea45a4de-3e71-4605-b02d-258b9dbb544c\") " pod="openshift-machine-config-operator/machine-config-server-99dfz"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.279077 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxb5\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-kube-api-access-tvxb5\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.299209 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbc98\" (UniqueName: \"kubernetes.io/projected/402cb251-6fda-417f-a9bf-30b59833a3cd-kube-api-access-rbc98\") pod \"etcd-operator-b45778765-25xhd\" (UID: \"402cb251-6fda-417f-a9bf-30b59833a3cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.305226 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.308792 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-99dfz"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.310551 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9hw9\" (UniqueName: \"kubernetes.io/projected/2640d0ff-e8c2-4795-bf96-9b862e10de22-kube-api-access-k9hw9\") pod \"service-ca-operator-777779d784-gnwbx\" (UID: \"2640d0ff-e8c2-4795-bf96-9b862e10de22\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.312837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.322726 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.335677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e3b3a7a-6c2e-4bb5-8768-be94244740aa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7lhv\" (UID: \"8e3b3a7a-6c2e-4bb5-8768-be94244740aa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.343021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.351206 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.351555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szblf\" (UniqueName: \"kubernetes.io/projected/cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6-kube-api-access-szblf\") pod \"kube-storage-version-migrator-operator-b67b599dd-5b5mt\" (UID: \"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.368456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.368990 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.868966328 +0000 UTC m=+159.318871109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.382975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"collect-profiles-29465925-66zzw\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.392847 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.399673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqg9t\" (UniqueName: \"kubernetes.io/projected/8674271c-47a7-4722-9ceb-76e787b31485-kube-api-access-xqg9t\") pod \"ingress-canary-tvd7t\" (UID: \"8674271c-47a7-4722-9ceb-76e787b31485\") " pod="openshift-ingress-canary/ingress-canary-tvd7t"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.419902 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qsq5\" (UniqueName: \"kubernetes.io/projected/414cbbdd-31b2-4eae-84a7-33cd1a4961b5-kube-api-access-5qsq5\") pod \"csi-hostpathplugin-xqcqv\" (UID: \"414cbbdd-31b2-4eae-84a7-33cd1a4961b5\") " pod="hostpath-provisioner/csi-hostpathplugin-xqcqv"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.435179 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtb92\" (UniqueName: \"kubernetes.io/projected/f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f-kube-api-access-gtb92\") pod \"packageserver-d55dfcdfc-lkqbn\" (UID: \"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.465142 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lqmc\" (UniqueName: \"kubernetes.io/projected/aa62f546-f6a1-46e8-9023-482a9e2e04b6-kube-api-access-8lqmc\") pod \"package-server-manager-789f6589d5-7ll84\" (UID: \"aa62f546-f6a1-46e8-9023-482a9e2e04b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.466968 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.469062 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.469552 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:33.969532062 +0000 UTC m=+159.419436843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.471625 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8lqcl"]
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.476757 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.481738 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.483311 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.492700 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.492947 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76c2db54-b4ef-4798-ac0e-4bdeaa6053f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2m9hx\" (UID: \"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.494981 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 09 10:48:33 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Jan 09 10:48:33 crc kubenswrapper[4727]: [+]process-running ok
Jan 09 10:48:33 crc kubenswrapper[4727]: healthz check failed
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.495031 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.506705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qzn\" (UniqueName: \"kubernetes.io/projected/cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c-kube-api-access-d4qzn\") pod \"olm-operator-6b444d44fb-xs5vp\" (UID: \"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.511377 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a8e8d16-796c-4b3e-a29c-c5356e7dde5e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tfrb7\" (UID: \"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.515910 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.524602 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.535422 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.553566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh47w\" (UniqueName: \"kubernetes.io/projected/50dba57c-02ba-4204-a8d0-6f95ffed6db7-kube-api-access-sh47w\") pod \"catalog-operator-68c6474976-jtjg7\" (UID: \"50dba57c-02ba-4204-a8d0-6f95ffed6db7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.554914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5"]
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.569922 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tvd7t"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.571751 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.572373 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.072355904 +0000 UTC m=+159.522260685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.599687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgq6\" (UniqueName: \"kubernetes.io/projected/be8a84bb-6eb3-4f11-8730-1bcb378cafa9-kube-api-access-8xgq6\") pod \"service-ca-9c57cc56f-nz6pf\" (UID: \"be8a84bb-6eb3-4f11-8730-1bcb378cafa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf"
Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.600024 4727 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.607184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe3c54e0-1aca-48bf-a737-cdb8c507f66d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d2jb6\" (UID: \"fe3c54e0-1aca-48bf-a737-cdb8c507f66d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.614254 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5d9bz"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.622725 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzd77\" (UniqueName: \"kubernetes.io/projected/879d1222-addb-406a-b8fd-3ce4068c1d08-kube-api-access-fzd77\") pod \"control-plane-machine-set-operator-78cbb6b69f-w6pvx\" (UID: \"879d1222-addb-406a-b8fd-3ce4068c1d08\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.631285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4lmh\" (UniqueName: \"kubernetes.io/projected/27d5037e-e25b-4865-a1fe-7d165be1bf23-kube-api-access-p4lmh\") pod \"dns-default-ppcsh\" (UID: \"27d5037e-e25b-4865-a1fe-7d165be1bf23\") " pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.672494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.672977 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.172940639 +0000 UTC m=+159.622845420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.717853 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.747137 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.752064 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.772962 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.774844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.774931 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.775428 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.275389609 +0000 UTC m=+159.725294390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.779174 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-75slj\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.793868 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.807413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerStarted","Data":"7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.809197 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.809905 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.848904 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.865816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.867954 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.876844 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" event={"ID":"1d3f932b-fb41-4a2b-967b-a15de9606cbd","Type":"ContainerStarted","Data":"377dda43b2c98fed70c98b2ae4b706aba171eb66a0681a3802669479c5019605"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.877367 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.877662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878430 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.878987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.879881 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-auth-proxy-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.880302 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c999b3d9-4231-4163-821a-b759599c6510-config\") pod \"machine-approver-56656f9798-9zbmm\" (UID: \"c999b3d9-4231-4163-821a-b759599c6510\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.880781 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.380721753 +0000 UTC m=+159.830626534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.881359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e76cc6a-976f-4e61-8829-bbf3c4313293-config\") pod \"authentication-operator-69f744f599-mkdts\" (UID: \"7e76cc6a-976f-4e61-8829-bbf3c4313293\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.882242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/423f9db2-b3a1-406d-b906-bc4ba37fdb55-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rbqsq\" (UID: \"423f9db2-b3a1-406d-b906-bc4ba37fdb55\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.882326 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-images\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: \"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.885880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5b64d7-46ec-4f56-a044-4b57c96ebc03-config\") pod \"machine-api-operator-5694c8668f-9b2sc\" (UID: 
\"ff5b64d7-46ec-4f56-a044-4b57c96ebc03\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.892804 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.904637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-99dfz" event={"ID":"ea45a4de-3e71-4605-b02d-258b9dbb544c","Type":"ContainerStarted","Data":"f4db1b45ec2e457b7a2ff56b91950d8cd66199b63cc6ed9895ba28a908c491fb"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.904812 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.910139 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" event={"ID":"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d","Type":"ContainerStarted","Data":"3086ef3ded19987e359151417f1b56f20e76fe1a2c88e5862198ed710decfc2d"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.910198 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" event={"ID":"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d","Type":"ContainerStarted","Data":"2e8fc798a88b6d1d25186e12ba2db436e5433f254c2495c278f4ec1233609749"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.911862 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" event={"ID":"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b","Type":"ContainerStarted","Data":"37d8ae718f41f6a0950faffa848c62cc06c3dfeac506e3c9b7008cd17392904b"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.911889 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" event={"ID":"fe298a1a-a64b-4d9a-9fd8-0dce96af8d1b","Type":"ContainerStarted","Data":"529eefbe22c8dec19bde16436ca00af1e43b78c93a59cb01d933ed244b485005"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.916956 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" event={"ID":"15a46c73-a8f2-427f-a701-01ccad52c6a1","Type":"ContainerStarted","Data":"14b233435448ffa1ee59174043b0d26cfc4e75dca3d643d36c17c59d83d7a105"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.923657 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" event={"ID":"e8459883-ed7a-4108-8198-ee2fbd63e891","Type":"ContainerStarted","Data":"b39a61d49e2f5a9e8994af8e26be433519a5e3071e6951a060ce0c7abd5b818f"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.929621 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.930727 4727 generic.go:334] "Generic (PLEG): container finished" podID="7604b799-797e-4127-84cf-3f7e1c17dc87" containerID="237add48c3f106ae9133276b7ce2295893b915d817989efdff99ba5581e326dc" exitCode=0 Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.930821 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" event={"ID":"7604b799-797e-4127-84cf-3f7e1c17dc87","Type":"ContainerDied","Data":"237add48c3f106ae9133276b7ce2295893b915d817989efdff99ba5581e326dc"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.930874 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" event={"ID":"7604b799-797e-4127-84cf-3f7e1c17dc87","Type":"ContainerStarted","Data":"f1bb4d7dea37e80b3e66934e730477dbef9d7b4cc672a2a76e686391696efc55"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.931917 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerStarted","Data":"1e7988cfc3c9b4199125fca59cb133b0affc6fe32e3a90ef973bd39d5ee4a2bc"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.932792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5d9bz" event={"ID":"33b90f5a-a103-48d8-9eb1-fd7a153250ac","Type":"ContainerStarted","Data":"2933be720f5eafd11602af7494a86ab36b4f368c2d2223bb17bd5a9a8a9f19c1"} Jan 09 10:48:33 crc kubenswrapper[4727]: I0109 10:48:33.980103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" 
(UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:33 crc kubenswrapper[4727]: E0109 10:48:33.984278 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.484258235 +0000 UTC m=+159.934163016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.002927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.082433 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.084099 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.584071408 +0000 UTC m=+160.033976189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.102601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.108795 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.184687 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.185415 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.685395616 +0000 UTC m=+160.135300397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.287024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.288862 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.788835335 +0000 UTC m=+160.238740126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.389343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.389778 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.88973812 +0000 UTC m=+160.339642891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: W0109 10:48:34.395656 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc999b3d9_4231_4163_821a_b759599c6510.slice/crio-3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63 WatchSource:0}: Error finding container 3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63: Status 404 returned error can't find the container with id 3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63 Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.488310 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:34 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:34 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:34 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.488370 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.492173 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.492572 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:34.99255473 +0000 UTC m=+160.442459511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.515055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.556171 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-s9tfg" podStartSLOduration=140.55613565 podStartE2EDuration="2m20.55613565s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.556034596 +0000 UTC m=+160.005939397" watchObservedRunningTime="2026-01-09 10:48:34.55613565 +0000 UTC m=+160.006040451" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.607592 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.607986 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.107967988 +0000 UTC m=+160.557872779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.709309 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.709813 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.209784839 +0000 UTC m=+160.659689620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.713296 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" podStartSLOduration=140.713265481 podStartE2EDuration="2m20.713265481s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.682047122 +0000 UTC m=+160.131951923" watchObservedRunningTime="2026-01-09 10:48:34.713265481 +0000 UTC m=+160.163170282" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.815778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.816810 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.316777892 +0000 UTC m=+160.766682673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.821267 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dwxl4" podStartSLOduration=140.821212721 podStartE2EDuration="2m20.821212721s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.76482358 +0000 UTC m=+160.214728371" watchObservedRunningTime="2026-01-09 10:48:34.821212721 +0000 UTC m=+160.271117502" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.822306 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" podStartSLOduration=141.822290982 podStartE2EDuration="2m21.822290982s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.794217086 +0000 UTC m=+160.244121867" watchObservedRunningTime="2026-01-09 10:48:34.822290982 +0000 UTC m=+160.272195763" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.833673 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xwvhd" podStartSLOduration=140.833652962 podStartE2EDuration="2m20.833652962s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:34.832606712 +0000 UTC m=+160.282511483" watchObservedRunningTime="2026-01-09 10:48:34.833652962 +0000 UTC m=+160.283557743" Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.929307 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:34 crc kubenswrapper[4727]: E0109 10:48:34.929832 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.429808759 +0000 UTC m=+160.879713540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:34 crc kubenswrapper[4727]: I0109 10:48:34.976976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerStarted","Data":"929125b8b64331d2d6d391ab423a97e682d7d12d88e3ecc772238a6afa971136"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.023418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-99dfz" event={"ID":"ea45a4de-3e71-4605-b02d-258b9dbb544c","Type":"ContainerStarted","Data":"3ad3a1c7695129aa8d8ced3159c1d2b7d82ad6ef03c33d7264b30a28ff821909"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.032598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.033085 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.533067333 +0000 UTC m=+160.982972114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.083481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" event={"ID":"16e8015c-ce8b-4b4e-9d4d-4f01c0d07b8d","Type":"ContainerStarted","Data":"dc4ac0a6c8b48eb3e132b9698adb597799e5c90cbf998db3a56ce970ecd14204"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.097204 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" event={"ID":"15a46c73-a8f2-427f-a701-01ccad52c6a1","Type":"ContainerStarted","Data":"f2c6bbada562da92b79ea1b845bd220e7b8a1e2fe2876a76da7080e8dae09bcd"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.101488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" event={"ID":"c999b3d9-4231-4163-821a-b759599c6510","Type":"ContainerStarted","Data":"3d60c2f5f6053877f0515cfcdc7ec718d9a6e50ee6abfec8eba5eadc5f264f63"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.121350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5d9bz" event={"ID":"33b90f5a-a103-48d8-9eb1-fd7a153250ac","Type":"ContainerStarted","Data":"a194df4419f19dc760bce972698885958e3d8944f1106398ef9790de8436c302"} Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.121468 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5d9bz" 
Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.133951 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.134391 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.63435766 +0000 UTC m=+161.084262441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.134565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.142109 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:35.642082114 +0000 UTC m=+161.091986895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.144630 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.144673 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.227005 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.236362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.236876 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.736755928 +0000 UTC m=+161.186660700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.237930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.239543 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.739486348 +0000 UTC m=+161.189391129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.317291 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zcx2c" podStartSLOduration=141.3172642 podStartE2EDuration="2m21.3172642s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.313643595 +0000 UTC m=+160.763548396" watchObservedRunningTime="2026-01-09 10:48:35.3172642 +0000 UTC m=+160.767168981" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.339980 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.341134 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.841116093 +0000 UTC m=+161.291020874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.434889 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-99dfz" podStartSLOduration=5.4348613 podStartE2EDuration="5.4348613s" podCreationTimestamp="2026-01-09 10:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.430763831 +0000 UTC m=+160.880668612" watchObservedRunningTime="2026-01-09 10:48:35.4348613 +0000 UTC m=+160.884766081" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.442343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.442775 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:35.94276132 +0000 UTC m=+161.392666101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.472423 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5d9bz" podStartSLOduration=141.472398212 podStartE2EDuration="2m21.472398212s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.469119077 +0000 UTC m=+160.919023858" watchObservedRunningTime="2026-01-09 10:48:35.472398212 +0000 UTC m=+160.922302993" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.543811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.545728 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.043992995 +0000 UTC m=+161.493897776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.550708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.551612 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.051588006 +0000 UTC m=+161.501492787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.587725 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xj755" podStartSLOduration=141.587698116 podStartE2EDuration="2m21.587698116s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:35.584007169 +0000 UTC m=+161.033911960" watchObservedRunningTime="2026-01-09 10:48:35.587698116 +0000 UTC m=+161.037602897" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.645240 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:35 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:35 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:35 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.645326 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.651866 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.652082 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.152041488 +0000 UTC m=+161.601946259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.652254 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.652780 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.152770469 +0000 UTC m=+161.602675250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.752958 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.753526 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.253478629 +0000 UTC m=+161.703383410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.861558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.862785 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.362765218 +0000 UTC m=+161.812670009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.918833 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xqcqv"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.929609 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.953648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.967559 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.967785 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.467746951 +0000 UTC m=+161.917651732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.967914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt"] Jan 09 10:48:35 crc kubenswrapper[4727]: I0109 10:48:35.967996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:35 crc kubenswrapper[4727]: E0109 10:48:35.968521 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.468477672 +0000 UTC m=+161.918382453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.023259 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode375e91d_f60e_4b86_87ee_a043c2b81128.slice/crio-b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04 WatchSource:0}: Error finding container b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04: Status 404 returned error can't find the container with id b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.029827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.046655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx72n"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.051692 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.059962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.060241 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc"] Jan 09 10:48:36 
crc kubenswrapper[4727]: I0109 10:48:36.072534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.073050 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.573025294 +0000 UTC m=+162.022930075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.087585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.090689 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.098490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.105122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.108823 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.110075 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-25xhd"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.123087 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" event={"ID":"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2","Type":"ContainerStarted","Data":"5a8e8755d9e7d9c0446931945fedc4c8e9e3bc443bf709901f0ac3a73068dc47"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.125466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" event={"ID":"e375e91d-f60e-4b86-87ee-a043c2b81128","Type":"ContainerStarted","Data":"b9df1524d9fcbf6ee36073db6f7cd342443cc2d895b711b290eda71abf833a04"} Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.130716 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096c2622_3648_4579_8139_9d3a8d4a9006.slice/crio-73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee WatchSource:0}: Error finding container 73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee: Status 404 returned error can't find the container with id 73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.132892 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" 
event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerStarted","Data":"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87"} Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.134948 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2640d0ff_e8c2_4795_bf96_9b862e10de22.slice/crio-be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381 WatchSource:0}: Error finding container be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381: Status 404 returned error can't find the container with id be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.145306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" event={"ID":"15a46c73-a8f2-427f-a701-01ccad52c6a1","Type":"ContainerStarted","Data":"3190019309aae729aec535edfcc30635b29c8e3223896dc72f3bf1f5351dc951"} Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.159413 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77f8346_e5c5_4f5e_9ac5_71fc4018dd2f.slice/crio-21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1 WatchSource:0}: Error finding container 21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1: Status 404 returned error can't find the container with id 21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.160317 4727 generic.go:334] "Generic (PLEG): container finished" podID="198987e6-b5aa-4331-9e5e-4a51a02ab712" containerID="d18920e2a077c5cda46113e3cc3f62a4796c5e64a833e970353657224ca906d6" exitCode=0 Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.160566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" 
event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerDied","Data":"d18920e2a077c5cda46113e3cc3f62a4796c5e64a833e970353657224ca906d6"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.166465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"64c352984de1b1d53dfe07338f72589b6d7e501da4b28d4d63632f549e463612"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.171738 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-pjc7c" podStartSLOduration=143.171711294 podStartE2EDuration="2m23.171711294s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.155937985 +0000 UTC m=+161.605842776" watchObservedRunningTime="2026-01-09 10:48:36.171711294 +0000 UTC m=+161.621616095" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.175036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.175153 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.176618 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerStarted","Data":"ad82146e8d47df4ecdb309d20d0467e475d2f1c2c2694bb4124965245fd62da4"} Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.189154 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.689127651 +0000 UTC m=+162.139032432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.189635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a29665a-01da-4439-b13d-3950bf573044-metrics-certs\") pod \"network-metrics-daemon-vhsj4\" (UID: \"6a29665a-01da-4439-b13d-3950bf573044\") " pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.191187 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vhsj4" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.225759 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerStarted","Data":"cb8511618c1168f1b695c78cda0dcd1111aea86736fe3350e8e14bc57a092c35"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.274926 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wxzs5" podStartSLOduration=142.274905106 podStartE2EDuration="2m22.274905106s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.186768292 +0000 UTC m=+161.636673093" watchObservedRunningTime="2026-01-09 10:48:36.274905106 +0000 UTC m=+161.724809887" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.279722 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" event={"ID":"7604b799-797e-4127-84cf-3f7e1c17dc87","Type":"ContainerStarted","Data":"4a7f5a18dbb009a7091c2259d98bfa96692fda4d16903837f198aa560dcf585e"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.281239 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.281588 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.781481927 +0000 UTC m=+162.231386708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.282002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.284501 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.784485605 +0000 UTC m=+162.234390576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.299035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" event={"ID":"c999b3d9-4231-4163-821a-b759599c6510","Type":"ContainerStarted","Data":"2d2e741862a7a5ade9e107ff6bffbc5b387df0cbea4a3a6ba65415f7abf29614"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.299116 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" event={"ID":"c999b3d9-4231-4163-821a-b759599c6510","Type":"ContainerStarted","Data":"1b97fee42eafd23030e56f1a8dc68377690db45fdc4b7a19cbaa8f030ee72356"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.300839 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" event={"ID":"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6","Type":"ContainerStarted","Data":"6e25a29358077a675b378ed578a122e4372977e0549b974e46808032fef13ad6"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.302450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerStarted","Data":"887701e00f73eb4322aa6d1e2bd519ba9d9e95d1edd0663c388315ca72c944aa"} Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.304787 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.304869 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.310398 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tvd7t"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.311339 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" podStartSLOduration=142.311313845 podStartE2EDuration="2m22.311313845s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.308363779 +0000 UTC m=+161.758268580" watchObservedRunningTime="2026-01-09 10:48:36.311313845 +0000 UTC m=+161.761218646" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.382001 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9zbmm" podStartSLOduration=143.381970871 podStartE2EDuration="2m23.381970871s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:36.351564606 +0000 UTC m=+161.801469397" watchObservedRunningTime="2026-01-09 10:48:36.381970871 +0000 UTC m=+161.831875652" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.382503 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ppcsh"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.383151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.384829 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.884809353 +0000 UTC m=+162.334714134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.395628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nz6pf"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.396934 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9b2sc"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.400498 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.419464 4727 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.425766 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.432480 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.436011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.437799 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.484832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.485239 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:36.985223793 +0000 UTC m=+162.435128574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.488956 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:36 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:36 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:36 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.489032 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.490973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.521083 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkdts"] Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.586154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.586415 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.086356716 +0000 UTC m=+162.536261497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.586654 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.587141 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.087125358 +0000 UTC m=+162.537030139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.609980 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2ba90a_b9c8_4dbd_a1f5_324e3f12da9c.slice/crio-51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9 WatchSource:0}: Error finding container 51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9: Status 404 returned error can't find the container with id 51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9 Jan 09 10:48:36 crc kubenswrapper[4727]: W0109 10:48:36.664746 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod423f9db2_b3a1_406d_b906_bc4ba37fdb55.slice/crio-95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee WatchSource:0}: Error finding container 95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee: Status 404 returned error can't find the container with id 95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.677867 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n4g9c" Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.688785 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.689269 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.189247008 +0000 UTC m=+162.639151789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.791103 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.791675 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.291646527 +0000 UTC m=+162.741551428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.893669 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.893965 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.393916862 +0000 UTC m=+162.843821653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.894051 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.894620 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.394599692 +0000 UTC m=+162.844504473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:36 crc kubenswrapper[4727]: I0109 10:48:36.995395 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:36 crc kubenswrapper[4727]: E0109 10:48:36.995924 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.495893028 +0000 UTC m=+162.945797809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.000949 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vhsj4"] Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.097301 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.098009 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.597981808 +0000 UTC m=+163.047886589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.199757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.200219 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.700193531 +0000 UTC m=+163.150098312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.301984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.302779 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.802764114 +0000 UTC m=+163.252668885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.395871 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.395935 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.404494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.405053 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:37.905030539 +0000 UTC m=+163.354935320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.410976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" event={"ID":"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f","Type":"ContainerStarted","Data":"801a771c7a0625bfc59b15b1cf0fc993257825d49ccd5fc9671333700c59dd02"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.411071 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" event={"ID":"f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f","Type":"ContainerStarted","Data":"21db95af1db6d65f4eb97915d776887e1718bcc57bb0407a287e9df4857aa9d1"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.414487 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.416975 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.439469 4727 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lkqbn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.439564 4727 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" podUID="f77f8346-e5c5-4f5e-9ac5-71fc4018dd2f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.456617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" event={"ID":"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e","Type":"ContainerStarted","Data":"b171c00f455526146e644db10192475f94aa9ea83ccb51d6fa6430e1a72f5e6b"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.456697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" event={"ID":"7a8e8d16-796c-4b3e-a29c-c5356e7dde5e","Type":"ContainerStarted","Data":"11b5c1b051f4f1c9c843c0c72eef5f67ef0896cb1bcbf9f5d9c53648a697cde9"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.473049 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" event={"ID":"879d1222-addb-406a-b8fd-3ce4068c1d08","Type":"ContainerStarted","Data":"e71309d06338273bd0d538a59a5b81b2b5a63d25187d459b41c96fd68aad5695"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.493156 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:37 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:37 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:37 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.493254 4727 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.506728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.508483 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.008468918 +0000 UTC m=+163.458373699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.530100 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" event={"ID":"fe3c54e0-1aca-48bf-a737-cdb8c507f66d","Type":"ContainerStarted","Data":"bd138eb3645b214256a3bd5769c05bfcec82a07c0ad7d8f5894397afb8cfeb73"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.530158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" event={"ID":"fe3c54e0-1aca-48bf-a737-cdb8c507f66d","Type":"ContainerStarted","Data":"0af779d575cb512d96688dbd2794c73058049e86f35a57dc18ef9d9fe97ea3d9"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.538470 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" podStartSLOduration=143.538444871 podStartE2EDuration="2m23.538444871s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.476159238 +0000 UTC m=+162.926064009" watchObservedRunningTime="2026-01-09 10:48:37.538444871 +0000 UTC m=+162.988349652" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.552453 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" 
event={"ID":"cde39c3d-01e5-4ac6-b29b-b3171ca7eaf6","Type":"ContainerStarted","Data":"f67f44b6b817ada0d7e7583dd6706384d4c9284cd900dbd8a8a91af861231be4"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.556140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" event={"ID":"e375e91d-f60e-4b86-87ee-a043c2b81128","Type":"ContainerStarted","Data":"bab5a31c30b737153a7f184cf59a19984c1e9c5ceb52342c8221105f7a4fceb1"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.566091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" event={"ID":"e0621386-4e3b-422a-93db-adcd616daa7a","Type":"ContainerStarted","Data":"5f109339328ab84a5716df70bed4af6fada6f4467a56eaee2356f054dc120050"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.566145 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" event={"ID":"e0621386-4e3b-422a-93db-adcd616daa7a","Type":"ContainerStarted","Data":"40000b60cc1b2bd6a0d5284af3ef9e33ee5ed205fb3182b76bbcee3681754dae"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.576310 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tfrb7" podStartSLOduration=143.576289391 podStartE2EDuration="2m23.576289391s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.542418585 +0000 UTC m=+162.992323366" watchObservedRunningTime="2026-01-09 10:48:37.576289391 +0000 UTC m=+163.026194172" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.584791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-vhsj4" event={"ID":"6a29665a-01da-4439-b13d-3950bf573044","Type":"ContainerStarted","Data":"c50cce3f7a2384c4ffeb17558511922a4c2d4961f8e77846d3b83a8f8e029466"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.604907 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerStarted","Data":"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.605794 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.608412 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.609855 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.109838017 +0000 UTC m=+163.559742798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.625420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" event={"ID":"be8a84bb-6eb3-4f11-8730-1bcb378cafa9","Type":"ContainerStarted","Data":"06d04df3dac6010b8701e6906839b91dc9bca91a2a8fd0afb2cc7f177e237e46"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.626275 4727 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ldkw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.626337 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.643491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerStarted","Data":"bf7c09a3701b9efda131588870469c1b6268f38bdcea1980699756debdae5027"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.645582 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.648963 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrfkk" podStartSLOduration=143.648928253 podStartE2EDuration="2m23.648928253s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.620082915 +0000 UTC m=+163.069987716" watchObservedRunningTime="2026-01-09 10:48:37.648928253 +0000 UTC m=+163.098833034" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.664789 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.665181 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.666428 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" event={"ID":"50dba57c-02ba-4204-a8d0-6f95ffed6db7","Type":"ContainerStarted","Data":"ab02b688fc95345574fda9a402f623919439933b5100b4c9a90d423bdd099e96"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.667867 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.682095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" event={"ID":"8e3b3a7a-6c2e-4bb5-8768-be94244740aa","Type":"ContainerStarted","Data":"2490ae4a3583e255dcbfb1794cba7dee8f901c42e3866e961afdc986e93bdb4d"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.691016 4727 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jtjg7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.691088 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" podUID="50dba57c-02ba-4204-a8d0-6f95ffed6db7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.697389 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podStartSLOduration=143.697367633 podStartE2EDuration="2m23.697367633s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.696191468 +0000 UTC m=+163.146096259" watchObservedRunningTime="2026-01-09 10:48:37.697367633 +0000 UTC m=+163.147272424" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.697675 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5b5mt" podStartSLOduration=143.697669941 podStartE2EDuration="2m23.697669941s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.657802962 +0000 UTC m=+163.107707763" watchObservedRunningTime="2026-01-09 10:48:37.697669941 +0000 UTC m=+163.147574712" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.711737 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.716113 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.216082347 +0000 UTC m=+163.665987318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.745386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" event={"ID":"7e76cc6a-976f-4e61-8829-bbf3c4313293","Type":"ContainerStarted","Data":"6b3fd50c00f39b9584c418d0c77da39d12f67cee5675dbefcd5b5c3144112020"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.764978 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" event={"ID":"2640d0ff-e8c2-4795-bf96-9b862e10de22","Type":"ContainerStarted","Data":"be9ec6f5a035f5a5c00d05fc3bc8cd1266029a5a4332bbfe31fe846b33b8d381"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.780857 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" podStartSLOduration=144.780836481 podStartE2EDuration="2m24.780836481s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.761220661 +0000 UTC m=+163.211125462" watchObservedRunningTime="2026-01-09 10:48:37.780836481 +0000 UTC m=+163.230741262" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.788849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" 
event={"ID":"aa62f546-f6a1-46e8-9023-482a9e2e04b6","Type":"ContainerStarted","Data":"5e576fb8ee950301ba9a269bde7e48c0fdb07ccd317e16b1cf7c1c911c8712cc"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.818157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.819960 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.319938449 +0000 UTC m=+163.769843240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.825105 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" event={"ID":"402cb251-6fda-417f-a9bf-30b59833a3cd","Type":"ContainerStarted","Data":"38d8ee1550f83a30b9189001e716189d87cfbff3cc78978ff319f10454e64e54"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.834033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" 
event={"ID":"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7","Type":"ContainerStarted","Data":"79d9ed59f4af60327a6223aa4d7908523fae3d2aacc537c101eff19740772d33"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.841203 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" podStartSLOduration=143.841184877 podStartE2EDuration="2m23.841184877s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.819292709 +0000 UTC m=+163.269197490" watchObservedRunningTime="2026-01-09 10:48:37.841184877 +0000 UTC m=+163.291089658" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.880111 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" podStartSLOduration=143.880088879 podStartE2EDuration="2m23.880088879s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.875794923 +0000 UTC m=+163.325699704" watchObservedRunningTime="2026-01-09 10:48:37.880088879 +0000 UTC m=+163.329993670" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.880961 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" podStartSLOduration=143.880956113 podStartE2EDuration="2m23.880956113s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:37.842414103 +0000 UTC m=+163.292318884" watchObservedRunningTime="2026-01-09 10:48:37.880956113 +0000 UTC m=+163.330860884" Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 
10:48:37.921448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:37 crc kubenswrapper[4727]: E0109 10:48:37.921808 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.421792181 +0000 UTC m=+163.871696962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.972847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" event={"ID":"096c2622-3648-4579-8139-9d3a8d4a9006","Type":"ContainerStarted","Data":"907b67f6ed0eb76717e264b7b5b4ee1c06cbe9e1598e02ceb280a758a65b41c1"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.973024 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" event={"ID":"096c2622-3648-4579-8139-9d3a8d4a9006","Type":"ContainerStarted","Data":"73de36eaaa27196dacd78249fcc5cbdaddf690773c0cdb157f16810acba14eee"} Jan 09 10:48:37 crc kubenswrapper[4727]: I0109 10:48:37.985412 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" event={"ID":"423f9db2-b3a1-406d-b906-bc4ba37fdb55","Type":"ContainerStarted","Data":"95e3c7d4c5cc676c8acce0ec2e73a946f1109d791adb6eb3896ec0bf3de9ccee"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.005066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerStarted","Data":"f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.022985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.024367 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.524352325 +0000 UTC m=+163.974257106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.039332 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" podStartSLOduration=144.039306179 podStartE2EDuration="2m24.039306179s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.037126926 +0000 UTC m=+163.487031717" watchObservedRunningTime="2026-01-09 10:48:38.039306179 +0000 UTC m=+163.489210960" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.046030 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ppcsh" event={"ID":"27d5037e-e25b-4865-a1fe-7d165be1bf23","Type":"ContainerStarted","Data":"a28c063b1a8f11351ce12639f86cb865a33fed91f38ec293190f61afd87867de"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.057098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" event={"ID":"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c","Type":"ContainerStarted","Data":"51de440a4134b990794465996782cd095f20128b54bd9e5761b3ef1528997de9"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.059386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.078745 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" event={"ID":"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2","Type":"ContainerStarted","Data":"85eaa9bf4508b8c054caa41cb60845ace2283fd2b119bd2e72b11e7f4c533e00"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.091278 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" podStartSLOduration=144.09125214 podStartE2EDuration="2m24.09125214s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.08643635 +0000 UTC m=+163.536341131" watchObservedRunningTime="2026-01-09 10:48:38.09125214 +0000 UTC m=+163.541156921" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.113988 4727 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xs5vp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.114066 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" podUID="cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.124813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.129540 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.629494383 +0000 UTC m=+164.079399164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.130901 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerStarted","Data":"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.131008 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.149796 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"96b22b2496db97d4d425031e52d6bce980c79a1600cdb97041a6c7cab8f9b132"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.161959 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlqcc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.162039 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.170177 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" podStartSLOduration=144.170154776 podStartE2EDuration="2m24.170154776s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.16892824 +0000 UTC m=+163.618833041" watchObservedRunningTime="2026-01-09 10:48:38.170154776 +0000 UTC m=+163.620059547" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.178502 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tvd7t" event={"ID":"8674271c-47a7-4722-9ceb-76e787b31485","Type":"ContainerStarted","Data":"945e9c3d527267f507f58dd0ce23f0c21ff89a8f15ec11b4ba5daf447cb9e23c"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.215054 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tvd7t" podStartSLOduration=8.215028361 podStartE2EDuration="8.215028361s" podCreationTimestamp="2026-01-09 10:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:38.2143291 +0000 UTC m=+163.664233881" watchObservedRunningTime="2026-01-09 10:48:38.215028361 +0000 UTC m=+163.664933142" Jan 09 
10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.226941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" event={"ID":"ff5b64d7-46ec-4f56-a044-4b57c96ebc03","Type":"ContainerStarted","Data":"3573280a8022ed2fdbb35102bc01caaff6fa5f9751d8bf241517a0363353173f"} Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.243696 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.288400 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gqtf6" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.298287 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.798228961 +0000 UTC m=+164.248133742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.354090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.356470 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.856456375 +0000 UTC m=+164.306361156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.462792 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.464040 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:38.964009824 +0000 UTC m=+164.413914605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.494041 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:38 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:38 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:38 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.494142 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.566319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.566838 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:39.066817934 +0000 UTC m=+164.516722715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.667869 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.668297 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.168275555 +0000 UTC m=+164.618180336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.770844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.772523 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.272485876 +0000 UTC m=+164.722390647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.874410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.874697 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.374640028 +0000 UTC m=+164.824544809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.874764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.875124 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.375109611 +0000 UTC m=+164.825014392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.976108 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.976401 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.476355497 +0000 UTC m=+164.926260288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:38 crc kubenswrapper[4727]: I0109 10:48:38.976627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:38 crc kubenswrapper[4727]: E0109 10:48:38.977469 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.477458798 +0000 UTC m=+164.927363579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.077950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.078350 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.578303632 +0000 UTC m=+165.028208423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.078652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.079352 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.579341532 +0000 UTC m=+165.029246313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.179862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.180018 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.67996969 +0000 UTC m=+165.129874481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.180781 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.181294 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.681271857 +0000 UTC m=+165.131176638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.249618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" event={"ID":"be8a84bb-6eb3-4f11-8730-1bcb378cafa9","Type":"ContainerStarted","Data":"f17bc121e96a5a3a51ae16ccc0f9c9927126e182a8d2bea0f87316a012a17b7c"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.251670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" event={"ID":"d3ee2782-e2b4-41bf-8633-000ccd1fb4d2","Type":"ContainerStarted","Data":"cdc86f2e00aa0249fe3898231615d899769b4e2722c517d0b80c2c9538b03224"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.257751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-25xhd" event={"ID":"402cb251-6fda-417f-a9bf-30b59833a3cd","Type":"ContainerStarted","Data":"3391b00f1c4d60a4352a89f22e9c984d269778a0c6133f0b2fd79b74f9de3b2b"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.268491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" event={"ID":"879d1222-addb-406a-b8fd-3ce4068c1d08","Type":"ContainerStarted","Data":"e5370f5dbfb07ce0b3a3ebc6279659aaf4bb39fd9bf8468506ef4f4fe1facf2b"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.279432 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" event={"ID":"76c2db54-b4ef-4798-ac0e-4bdeaa6053f7","Type":"ContainerStarted","Data":"176c47728240b5f7d4ec21b50e8b6f426f91eb78be302d0da850597fc66d8984"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.282749 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.282878 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.782848472 +0000 UTC m=+165.232753253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.283139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.283561 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.783536832 +0000 UTC m=+165.233441633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.285933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" event={"ID":"096c2622-3648-4579-8139-9d3a8d4a9006","Type":"ContainerStarted","Data":"7991ba421ab1162f4e8eef03610dce33434fb4c8e56a6d2e93189a5a9aa0efff"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.308983 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx72n" podStartSLOduration=145.308961122 podStartE2EDuration="2m25.308961122s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.308229011 +0000 UTC m=+164.758133812" watchObservedRunningTime="2026-01-09 10:48:39.308961122 +0000 UTC m=+164.758865903" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.310563 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nz6pf" podStartSLOduration=145.310558088 podStartE2EDuration="2m25.310558088s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.275437076 +0000 UTC m=+164.725341877" watchObservedRunningTime="2026-01-09 10:48:39.310558088 +0000 UTC m=+164.760462869" Jan 09 
10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.310816 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tvd7t" event={"ID":"8674271c-47a7-4722-9ceb-76e787b31485","Type":"ContainerStarted","Data":"efd7c01970885d7d711b6bc3c7616038082862b6a1884f23a5727799be34b097"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.325539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" event={"ID":"8e3b3a7a-6c2e-4bb5-8768-be94244740aa","Type":"ContainerStarted","Data":"198ce0c48a97ae659a008bf4fb01528f1083d6bffa1f27c6c0a6668ac5b1db08"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.364705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" event={"ID":"423f9db2-b3a1-406d-b906-bc4ba37fdb55","Type":"ContainerStarted","Data":"4f02fcb34ab3f66ca98113730cb607d8fc22c1dab41b0c4cc758db422fb293f7"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.365116 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w6pvx" podStartSLOduration=145.365101385 podStartE2EDuration="2m25.365101385s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.364385454 +0000 UTC m=+164.814290255" watchObservedRunningTime="2026-01-09 10:48:39.365101385 +0000 UTC m=+164.815006166" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.380110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" event={"ID":"aa62f546-f6a1-46e8-9023-482a9e2e04b6","Type":"ContainerStarted","Data":"c66c664bbc6cebd5a8a70b99bfa16b183fb998286f49bbe05075cd690ee1810e"} Jan 09 10:48:39 crc 
kubenswrapper[4727]: I0109 10:48:39.380165 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" event={"ID":"aa62f546-f6a1-46e8-9023-482a9e2e04b6","Type":"ContainerStarted","Data":"78bece42a06534eeb4055575f987efbfa8ec2a2e2516cd1eb6dc1a9e148e860f"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.380704 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.387414 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.387889 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.887854276 +0000 UTC m=+165.337759057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.388384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.389616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ppcsh" event={"ID":"27d5037e-e25b-4865-a1fe-7d165be1bf23","Type":"ContainerStarted","Data":"9043ea11d4bfa015f4d078b898a2506b6281612e1fd9774d5784b87a44da26ce"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.389659 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ppcsh" event={"ID":"27d5037e-e25b-4865-a1fe-7d165be1bf23","Type":"ContainerStarted","Data":"a7a1fe934ad6e1b1852854a7194c0087f2e8bfe0dac5789c767dabc34b77ca70"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.390215 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.392663 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:39.892645016 +0000 UTC m=+165.342549797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.421933 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.422013 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.428255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" event={"ID":"7e76cc6a-976f-4e61-8829-bbf3c4313293","Type":"ContainerStarted","Data":"6ba9eb198b758659a2306f44a0c31794e4c21c53539dd0b9910c76bd53476ebd"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.440952 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" 
event={"ID":"cb2ba90a-b9c8-4dbd-a1f5-324e3f12da9c","Type":"ContainerStarted","Data":"1a1b5e1caf0b625ea3f0dced0bdc083507159de55142ad65650b7b588346ae6f"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.469852 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2m9hx" podStartSLOduration=145.469825241 podStartE2EDuration="2m25.469825241s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.468657997 +0000 UTC m=+164.918562808" watchObservedRunningTime="2026-01-09 10:48:39.469825241 +0000 UTC m=+164.919730022" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.473065 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tszhc" podStartSLOduration=145.473043074 podStartE2EDuration="2m25.473043074s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.40204036 +0000 UTC m=+164.851945151" watchObservedRunningTime="2026-01-09 10:48:39.473043074 +0000 UTC m=+164.922947865" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.481262 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs5vp" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.489552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 
10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.491553 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:39.991527662 +0000 UTC m=+165.441432453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.497423 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:39 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:39 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:39 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.497493 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.505768 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7lhv" podStartSLOduration=145.505749276 podStartE2EDuration="2m25.505749276s" podCreationTimestamp="2026-01-09 10:46:14 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.505040415 +0000 UTC m=+164.954945206" watchObservedRunningTime="2026-01-09 10:48:39.505749276 +0000 UTC m=+164.955654057" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.548942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" event={"ID":"fe3c54e0-1aca-48bf-a737-cdb8c507f66d","Type":"ContainerStarted","Data":"92e3c8d3498b69c691449ec52e4580576b50dca017312a82ed74a7a9b85c16a9"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.589709 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" podStartSLOduration=145.589686778 podStartE2EDuration="2m25.589686778s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.589231174 +0000 UTC m=+165.039135975" watchObservedRunningTime="2026-01-09 10:48:39.589686778 +0000 UTC m=+165.039591559" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.593950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerStarted","Data":"cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.594305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.594778 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.094760015 +0000 UTC m=+165.544664796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.636817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" event={"ID":"50dba57c-02ba-4204-a8d0-6f95ffed6db7","Type":"ContainerStarted","Data":"e915b1081555dcde799dbeb2baf0b20b0e26a619a62d5a6c225eeafa2db8312d"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.638831 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rbqsq" podStartSLOduration=146.638805197 podStartE2EDuration="2m26.638805197s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.636217911 +0000 UTC m=+165.086122712" watchObservedRunningTime="2026-01-09 10:48:39.638805197 +0000 UTC m=+165.088709978" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.672676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jtjg7" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.685947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" event={"ID":"e0621386-4e3b-422a-93db-adcd616daa7a","Type":"ContainerStarted","Data":"7044afa95d4a853088eee1ae0a900a7e0a082eff2c9323139d0cc56f3cd9c72c"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.705341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.707010 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.206982029 +0000 UTC m=+165.656886810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.727707 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ppcsh" podStartSLOduration=10.727682862 podStartE2EDuration="10.727682862s" podCreationTimestamp="2026-01-09 10:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.722774989 +0000 UTC m=+165.172679770" watchObservedRunningTime="2026-01-09 10:48:39.727682862 +0000 UTC m=+165.177587653" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.732381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" event={"ID":"6a29665a-01da-4439-b13d-3950bf573044","Type":"ContainerStarted","Data":"8f10a5d1cb7fca9de7bef059a7a6f653e8861716ff14d18ad84dbc869ca8327e"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.762567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerStarted","Data":"1c3e07556aaa1ef418a783426fd229b444fd9dfb3f3bb091f13524103b97b3f1"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.763022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" event={"ID":"198987e6-b5aa-4331-9e5e-4a51a02ab712","Type":"ContainerStarted","Data":"35b96fa56688bb4a498cea3ba751f816b1c4710792ee5fb20818b2dd16dc557a"} Jan 09 10:48:39 crc 
kubenswrapper[4727]: I0109 10:48:39.778339 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkdts" podStartSLOduration=146.778320164 podStartE2EDuration="2m26.778320164s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.775489362 +0000 UTC m=+165.225394153" watchObservedRunningTime="2026-01-09 10:48:39.778320164 +0000 UTC m=+165.228224935" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.795574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" event={"ID":"ff5b64d7-46ec-4f56-a044-4b57c96ebc03","Type":"ContainerStarted","Data":"4f00542fa3797718e8f8f68230f017aad4b48904e67d41744a2635654f3af3d1"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.795635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" event={"ID":"ff5b64d7-46ec-4f56-a044-4b57c96ebc03","Type":"ContainerStarted","Data":"ce8043b00a697d39e6150d97af2be0105403da2dbda5615fd710655842784f38"} Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.809406 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.830909 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnwbx" 
event={"ID":"2640d0ff-e8c2-4795-bf96-9b862e10de22","Type":"ContainerStarted","Data":"21731b9f0102bf163dcc63260bcdda7995c6a5398da2ebab35c8edb156e6f4b8"} Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.841205 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.341173033 +0000 UTC m=+165.791077814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.873807 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.885276 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lkqbn" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.893944 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.918135 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:39 crc 
kubenswrapper[4727]: E0109 10:48:39.918343 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.418323417 +0000 UTC m=+165.868228198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.921767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:39 crc kubenswrapper[4727]: E0109 10:48:39.924294 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.424278811 +0000 UTC m=+165.874183592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.942179 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" podStartSLOduration=146.94215485 podStartE2EDuration="2m26.94215485s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:39.877927042 +0000 UTC m=+165.327831853" watchObservedRunningTime="2026-01-09 10:48:39.94215485 +0000 UTC m=+165.392059631" Jan 09 10:48:39 crc kubenswrapper[4727]: I0109 10:48:39.964071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.012479 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d2jb6" podStartSLOduration=146.012452235 podStartE2EDuration="2m26.012452235s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.011094976 +0000 UTC m=+165.460999767" watchObservedRunningTime="2026-01-09 10:48:40.012452235 +0000 UTC m=+165.462357026" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.022713 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.023543 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.523519057 +0000 UTC m=+165.973423838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.047376 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pk2gc" podStartSLOduration=147.04734739 podStartE2EDuration="2m27.04734739s" podCreationTimestamp="2026-01-09 10:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.044014153 +0000 UTC m=+165.493918954" watchObservedRunningTime="2026-01-09 10:48:40.04734739 +0000 UTC m=+165.497252171" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.128252 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.128686 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.628643645 +0000 UTC m=+166.078548426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.229261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.230952 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.73091428 +0000 UTC m=+166.180819061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.231320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.231811 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.731795375 +0000 UTC m=+166.181700156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.332546 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.332931 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.832907116 +0000 UTC m=+166.282811887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.434460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.434952 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:40.934934514 +0000 UTC m=+166.384839295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.443139 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.444301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.478444 4727 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.492929 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:40 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:40 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:40 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.493016 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.523055 4727 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.535484 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.535864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.536002 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.536051 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.536224 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.03619002 +0000 UTC m=+166.486094801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.570726 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.638582 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.638639 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.638668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 
10:48:40.638694 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.639095 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.139079273 +0000 UTC m=+166.588984054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.640248 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.640621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 
10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.659391 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9b2sc" podStartSLOduration=146.659361573 podStartE2EDuration="2m26.659361573s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.601795488 +0000 UTC m=+166.051700269" watchObservedRunningTime="2026-01-09 10:48:40.659361573 +0000 UTC m=+166.109266354" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.683550 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"certified-operators-qzjvr\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.740190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.740595 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.240570645 +0000 UTC m=+166.690475426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.749627 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.750884 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.764392 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.784808 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842116 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842202 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842243 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.842320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.842864 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.34284891 +0000 UTC m=+166.792753691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.879692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vhsj4" event={"ID":"6a29665a-01da-4439-b13d-3950bf573044","Type":"ContainerStarted","Data":"5e1b4ef8a5e34344096d1e1baf163e63534454b5b429e4cf7df8f6670cfb6c04"} Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.879766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"f38f0ba07c474f580b4e6f9ae3c73c71c9f7040d2572f9b245f9c3e90e1a2009"} Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.903213 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vhsj4" podStartSLOduration=146.903174075 podStartE2EDuration="2m26.903174075s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:40.899979151 +0000 UTC m=+166.349883932" watchObservedRunningTime="2026-01-09 10:48:40.903174075 +0000 UTC m=+166.353078856" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.943791 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.944082 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.444031223 +0000 UTC m=+166.893936014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.944163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.945477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.946074 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.946194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.949597 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.950969 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.952884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.954445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:40 crc kubenswrapper[4727]: E0109 10:48:40.958215 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.458190805 +0000 UTC m=+166.908095586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.967247 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 10:48:40 crc kubenswrapper[4727]: I0109 10:48:40.992904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.040800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"certified-operators-d2hxb\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051163 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " 
pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.051574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: E0109 10:48:41.051728 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.551704695 +0000 UTC m=+167.001609476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.077890 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154723 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.154768 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:41 crc kubenswrapper[4727]: E0109 10:48:41.155181 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-09 10:48:41.655166155 +0000 UTC m=+167.105070936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wfhcs" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.155802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.156030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.170209 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.186663 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.187398 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"]
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.199071 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"community-operators-lj7dw\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " pod="openshift-marketplace/community-operators-lj7dw"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270219 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270552 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270600 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.270625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: E0109 10:48:41.270789 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-09 10:48:41.770766337 +0000 UTC m=+167.220671108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.290286 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.307303 4727 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-09T10:48:40.478486791Z","Handler":null,"Name":""}
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.346776 4727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.346823 4727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380665 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.380746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.382237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.382821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.390334 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.390389 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.428999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"community-operators-tlqjk\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") " pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.496453 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 09 10:48:41 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Jan 09 10:48:41 crc kubenswrapper[4727]: [+]process-running ok
Jan 09 10:48:41 crc kubenswrapper[4727]: healthz check failed
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.496572 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.544729 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.630104 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wfhcs\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.658016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"]
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.694952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.722192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 09 10:48:41 crc kubenswrapper[4727]: I0109 10:48:41.728964 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.050257 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerStarted","Data":"fb23bdfd131c74ca699783debec87aba4e592b8f689b5331a1ea091df7d605ad"}
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.062585 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"ed649f8bba2b6a1a1bdd2f6edb8806bcb6cf173c0ad14cf474f40d6662ecc2fd"}
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.127875 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"]
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.483954 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zcx2c"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.535567 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 09 10:48:42 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Jan 09 10:48:42 crc kubenswrapper[4727]: [+]process-running ok
Jan 09 10:48:42 crc kubenswrapper[4727]: healthz check failed
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.535676 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.564996 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"]
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.566499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.582223 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.622805 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"]
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.651555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.652067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.652125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.754407 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.754471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.754531 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.755609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.755913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.768135 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"]
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.838416 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"]
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.861875 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"redhat-marketplace-dtgwm\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.924416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.936391 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.951278 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"]
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.957541 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:42 crc kubenswrapper[4727]: I0109 10:48:42.968368 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"]
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.057497 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.058443 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.062752 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.068485 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.076024 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.076113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.076135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.083012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.133583 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"]
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.133745 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.133829 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.134290 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.134854 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.163605 4727 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8lqcl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]log ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]etcd ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/max-in-flight-filter ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 09 10:48:43 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startinformers ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 09 10:48:43 crc kubenswrapper[4727]: livez check failed
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.163691 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" podUID="198987e6-b5aa-4331-9e5e-4a51a02ab712" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.172376 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.172455 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.181498 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerStarted","Data":"4e7da0de585649169fd8cf1b1066a4fe59cfd2aac18387a51307fee26f57796c"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.183390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.184011 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.184289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.191538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerStarted","Data":"a179ea666208967ecfd43822950b057cd35581408873a5090e17c2f3344f91f0"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.210480 4727 generic.go:334] "Generic (PLEG): container finished" podID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerID="f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d" exitCode=0
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.210617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerDied","Data":"f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.237163 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.237217 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-pjc7c"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.239212 4727 patch_prober.go:28] interesting pod/console-f9d7485db-pjc7c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.239256 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pjc7c" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.247391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"redhat-marketplace-pgnj5\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.250201 4727 generic.go:334] "Generic (PLEG): container finished" podID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e" exitCode=0
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.250391 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.250438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerStarted","Data":"fdad070e71d4bbce550062d735b7d4a59eda1ba60bd27a561289a761c73ac4de"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.259288 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.270907 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" exitCode=0
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.271365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.287102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.287159 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.290363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.341589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.354689 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" event={"ID":"414cbbdd-31b2-4eae-84a7-33cd1a4961b5","Type":"ContainerStarted","Data":"1430957bbab696cf47fc69d0b7a87908c92a1d373838b2994a4675be3c429e36"}
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.401150 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.485914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.494766 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 09 10:48:43 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Jan 09 10:48:43 crc kubenswrapper[4727]: [+]process-running ok
Jan 09 10:48:43 crc kubenswrapper[4727]: healthz check failed
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.494848 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.731419 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" podStartSLOduration=13.731391903 podStartE2EDuration="13.731391903s" podCreationTimestamp="2026-01-09 10:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:43.404185945 +0000 UTC m=+168.854090726" watchObservedRunningTime="2026-01-09 10:48:43.731391903 +0000 UTC m=+169.181296684"
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.735283 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"]
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.991002 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"]
Jan 09 10:48:43 crc kubenswrapper[4727]: I0109 10:48:43.993074 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.006077 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.017385 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"]
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.140400 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.140474 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.140534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.241863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.241946 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.241985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.242634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.243058 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.283157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"redhat-operators-dpfxv\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " pod="openshift-marketplace/redhat-operators-dpfxv"
Jan
09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.322312 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.352474 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.356957 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.361990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.381006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.402425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerStarted","Data":"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.402487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerStarted","Data":"ddbd37f0ce66367420bf898e597290bc9a838afaf3a3a6e5e804343b2dd74136"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.403318 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.437749 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.439070 4727 generic.go:334] "Generic (PLEG): container finished" podID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" exitCode=0 Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.439216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.439259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerStarted","Data":"974cefab389bdd1c50fa8159159be952f608b390b753f134588ad26e90c6144f"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.457954 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.458105 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.458164 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdl4\" (UniqueName: 
\"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.478530 4727 generic.go:334] "Generic (PLEG): container finished" podID="847f9d70-de5c-4bc0-9823-c4074e353565" containerID="d91d351a8c554abc2fdcaa83ba21ac1cd2528cb470f7cc7b072bc6c71cf7875d" exitCode=0 Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.478680 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"d91d351a8c554abc2fdcaa83ba21ac1cd2528cb470f7cc7b072bc6c71cf7875d"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.531389 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:44 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:44 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:44 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.533788 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7741215-a775-4b93-9062-45e620560d49" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" exitCode=0 Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.534467 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" podStartSLOduration=150.534451612 podStartE2EDuration="2m30.534451612s" podCreationTimestamp="2026-01-09 10:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-09 10:48:44.45666759 +0000 UTC m=+169.906572401" watchObservedRunningTime="2026-01-09 10:48:44.534451612 +0000 UTC m=+169.984356393" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.538597 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.539471 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29"} Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.560492 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.560604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.560734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc 
kubenswrapper[4727]: I0109 10:48:44.566286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.566644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.621357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"redhat-operators-qdwnw\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") " pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:44 crc kubenswrapper[4727]: I0109 10:48:44.720784 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.187192 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.285216 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") pod \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.286038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") pod \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.286124 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") pod \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\" (UID: \"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd\") " Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.287087 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume" (OuterVolumeSpecName: "config-volume") pod "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" (UID: "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.300453 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" (UID: "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.323529 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8" (OuterVolumeSpecName: "kube-api-access-gcsp8") pod "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" (UID: "a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd"). InnerVolumeSpecName "kube-api-access-gcsp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.352367 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.387491 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.387559 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.387573 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcsp8\" (UniqueName: \"kubernetes.io/projected/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd-kube-api-access-gcsp8\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:45 crc kubenswrapper[4727]: W0109 10:48:45.407317 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e3f567_63b4_4a95_b9df_5ec10f0ec4f2.slice/crio-42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb WatchSource:0}: Error finding container 42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb: Status 404 returned error can't find the container with id 
42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.448886 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:48:45 crc kubenswrapper[4727]: W0109 10:48:45.466777 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb9e6995_13ec_46a4_a659_0acc617449d3.slice/crio-5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e WatchSource:0}: Error finding container 5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e: Status 404 returned error can't find the container with id 5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.487388 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:45 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:45 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:45 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.487443 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.573348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerStarted","Data":"5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.576410 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerStarted","Data":"42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.584494 4727 generic.go:334] "Generic (PLEG): container finished" podID="52829665-e7e7-4322-a38e-731d67de0a1e" containerID="22ac19595fc4f0a184b8660c25bad2c44186a8659978bbc2fc9d9b604da4ef99" exitCode=0 Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.584610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"22ac19595fc4f0a184b8660c25bad2c44186a8659978bbc2fc9d9b604da4ef99"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.584649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerStarted","Data":"301dab3d04bf736cfc1cfc161435219d3d49e05da644c5b2c0bdb5bb934e1806"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.593349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerStarted","Data":"451f4bab74d641f0b415344bd0f9f45f49b43975fc46c42177b88bbf21165424"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.593395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerStarted","Data":"a9e5e14ca11d9d94c3416a4d34194e4e50491ebe7c102192c37b8e67893ce2cd"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.603207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" event={"ID":"a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd","Type":"ContainerDied","Data":"ad82146e8d47df4ecdb309d20d0467e475d2f1c2c2694bb4124965245fd62da4"} Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.603252 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.603265 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad82146e8d47df4ecdb309d20d0467e475d2f1c2c2694bb4124965245fd62da4" Jan 09 10:48:45 crc kubenswrapper[4727]: I0109 10:48:45.633587 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.633561323 podStartE2EDuration="2.633561323s" podCreationTimestamp="2026-01-09 10:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:45.629721831 +0000 UTC m=+171.079626612" watchObservedRunningTime="2026-01-09 10:48:45.633561323 +0000 UTC m=+171.083466114" Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.512878 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:46 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:46 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:46 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.512972 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.631661 4727 generic.go:334] "Generic (PLEG): container finished" podID="db9e6995-13ec-46a4-a659-0acc617449d3" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24" exitCode=0 Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.631922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"} Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.645081 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" exitCode=0 Jan 09 10:48:46 crc kubenswrapper[4727]: I0109 10:48:46.647093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222"} Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.502726 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:47 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:47 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:47 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.503308 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.666145 4727 generic.go:334] "Generic (PLEG): container finished" podID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerID="451f4bab74d641f0b415344bd0f9f45f49b43975fc46c42177b88bbf21165424" exitCode=0 Jan 09 10:48:47 crc kubenswrapper[4727]: I0109 10:48:47.666204 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerDied","Data":"451f4bab74d641f0b415344bd0f9f45f49b43975fc46c42177b88bbf21165424"} Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.154970 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.160859 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8lqcl" Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.488991 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:48 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:48 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:48 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.489087 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:48 crc kubenswrapper[4727]: I0109 10:48:48.906682 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-ppcsh" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.189426 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.237916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 10:48:49 crc kubenswrapper[4727]: E0109 10:48:49.238712 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerName="pruner" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238735 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerName="pruner" Jan 09 10:48:49 crc kubenswrapper[4727]: E0109 10:48:49.238756 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerName="collect-profiles" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238765 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerName="collect-profiles" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238899 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="658c98ad-94ee-4294-a8b9-b2b041a83e37" containerName="pruner" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.238920 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" containerName="collect-profiles" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.239523 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.244119 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.247259 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.258387 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.317825 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") pod \"658c98ad-94ee-4294-a8b9-b2b041a83e37\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318275 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") pod \"658c98ad-94ee-4294-a8b9-b2b041a83e37\" (UID: \"658c98ad-94ee-4294-a8b9-b2b041a83e37\") " Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "658c98ad-94ee-4294-a8b9-b2b041a83e37" (UID: "658c98ad-94ee-4294-a8b9-b2b041a83e37"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318789 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.318849 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658c98ad-94ee-4294-a8b9-b2b041a83e37-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.348114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "658c98ad-94ee-4294-a8b9-b2b041a83e37" (UID: "658c98ad-94ee-4294-a8b9-b2b041a83e37"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419442 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419624 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658c98ad-94ee-4294-a8b9-b2b041a83e37-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.419683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.472145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.485773 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:49 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:49 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:49 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.485867 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.587674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.716464 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"658c98ad-94ee-4294-a8b9-b2b041a83e37","Type":"ContainerDied","Data":"a9e5e14ca11d9d94c3416a4d34194e4e50491ebe7c102192c37b8e67893ce2cd"} Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.716544 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e5e14ca11d9d94c3416a4d34194e4e50491ebe7c102192c37b8e67893ce2cd" Jan 09 10:48:49 crc kubenswrapper[4727]: I0109 10:48:49.716637 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.346759 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 09 10:48:50 crc kubenswrapper[4727]: W0109 10:48:50.370070 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd8a2cf2b_2d26_4698_8fe0_17170dd1d102.slice/crio-9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de WatchSource:0}: Error finding container 9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de: Status 404 returned error can't find the container with id 9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.487183 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:50 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:50 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:50 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.487253 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:50 crc kubenswrapper[4727]: I0109 10:48:50.763283 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerStarted","Data":"9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de"} Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.209030 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.486339 4727 patch_prober.go:28] interesting pod/router-default-5444994796-zcx2c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 09 10:48:51 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Jan 09 10:48:51 crc kubenswrapper[4727]: [+]process-running ok Jan 09 10:48:51 crc kubenswrapper[4727]: healthz check failed Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.486414 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zcx2c" podUID="5789711a-8f11-41c1-ac8d-eb5e60d147a1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 09 10:48:51 crc kubenswrapper[4727]: I0109 10:48:51.794550 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerStarted","Data":"223ad946131ca206a81a1d53ebb182247d8fab8b452b2d0d147d8e26b668f0e1"} Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.491601 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.495683 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zcx2c" Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.514246 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.514186929 podStartE2EDuration="3.514186929s" podCreationTimestamp="2026-01-09 10:48:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:48:51.836142626 +0000 UTC m=+177.286047407" watchObservedRunningTime="2026-01-09 10:48:52.514186929 +0000 UTC m=+177.964091710" Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.838642 4727 generic.go:334] "Generic (PLEG): container finished" podID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerID="223ad946131ca206a81a1d53ebb182247d8fab8b452b2d0d147d8e26b668f0e1" exitCode=0 Jan 09 10:48:52 crc kubenswrapper[4727]: I0109 10:48:52.839745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerDied","Data":"223ad946131ca206a81a1d53ebb182247d8fab8b452b2d0d147d8e26b668f0e1"} Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.133982 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.134053 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.140172 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-5d9bz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.140272 4727 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-5d9bz" podUID="33b90f5a-a103-48d8-9eb1-fd7a153250ac" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.232565 4727 patch_prober.go:28] interesting pod/console-f9d7485db-pjc7c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 09 10:48:53 crc kubenswrapper[4727]: I0109 10:48:53.232624 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pjc7c" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.951971 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.952759 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" containerID="cri-o://7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879" gracePeriod=30 Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.958287 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:48:58 crc kubenswrapper[4727]: I0109 10:48:58.958607 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" 
containerName="controller-manager" containerID="cri-o://cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a" gracePeriod=30 Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.000687 4727 generic.go:334] "Generic (PLEG): container finished" podID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerID="7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879" exitCode=0 Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.000794 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerDied","Data":"7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879"} Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.003816 4727 generic.go:334] "Generic (PLEG): container finished" podID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerID="cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a" exitCode=0 Jan 09 10:49:00 crc kubenswrapper[4727]: I0109 10:49:00.003847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerDied","Data":"cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a"} Jan 09 10:49:01 crc kubenswrapper[4727]: I0109 10:49:01.735276 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.230673 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.233923 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.269624 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:02 crc kubenswrapper[4727]: E0109 10:49:02.269988 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270005 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" Jan 09 10:49:02 crc kubenswrapper[4727]: E0109 10:49:02.270016 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerName="pruner" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270024 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerName="pruner" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270134 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a2cf2b-2d26-4698-8fe0-17170dd1d102" containerName="pruner" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270155 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" containerName="route-controller-manager" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.270784 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.291721 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.333984 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") pod \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\" (UID: \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334060 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") pod \"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\" (UID: 
\"d8a2cf2b-2d26-4698-8fe0-17170dd1d102\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334111 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") pod \"85ff3ef7-a005-4881-9004-73bc686b4aae\" (UID: \"85ff3ef7-a005-4881-9004-73bc686b4aae\") " Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d8a2cf2b-2d26-4698-8fe0-17170dd1d102" (UID: "d8a2cf2b-2d26-4698-8fe0-17170dd1d102"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.334410 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.335022 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca" (OuterVolumeSpecName: "client-ca") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.335320 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config" (OuterVolumeSpecName: "config") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.346882 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d8a2cf2b-2d26-4698-8fe0-17170dd1d102" (UID: "d8a2cf2b-2d26-4698-8fe0-17170dd1d102"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.347117 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.347807 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj" (OuterVolumeSpecName: "kube-api-access-dxxfj") pod "85ff3ef7-a005-4881-9004-73bc686b4aae" (UID: "85ff3ef7-a005-4881-9004-73bc686b4aae"). InnerVolumeSpecName "kube-api-access-dxxfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435929 4727 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8a2cf2b-2d26-4698-8fe0-17170dd1d102-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435945 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxxfj\" (UniqueName: \"kubernetes.io/projected/85ff3ef7-a005-4881-9004-73bc686b4aae-kube-api-access-dxxfj\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.435959 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.436034 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ff3ef7-a005-4881-9004-73bc686b4aae-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.436086 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ff3ef7-a005-4881-9004-73bc686b4aae-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " 
pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537421 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.537458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.539854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.540390 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.547430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.558282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"route-controller-manager-5ff8755c47-bpjj2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:02 crc kubenswrapper[4727]: I0109 10:49:02.589105 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.031340 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.032756 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw" event={"ID":"85ff3ef7-a005-4881-9004-73bc686b4aae","Type":"ContainerDied","Data":"b91fc4ab06ef577d9c4e0fad8710798e885460e768b3d9d37cb5205f9fe286fa"} Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.032795 4727 scope.go:117] "RemoveContainer" containerID="7ec219d37983c2725c1757f160954193b7d1612ed2321d5422d584a2c52e6879" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.034000 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d8a2cf2b-2d26-4698-8fe0-17170dd1d102","Type":"ContainerDied","Data":"9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de"} Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.034026 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec0cd2a9c619f88709485bdb2a7543b478d300cb90cfa569feafab0f0cfe6de" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.034069 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.060710 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.063804 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrrcw"] Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.145382 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5d9bz" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.268774 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:49:03 crc kubenswrapper[4727]: I0109 10:49:03.275714 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:49:04 crc kubenswrapper[4727]: I0109 10:49:04.868838 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ff3ef7-a005-4881-9004-73bc686b4aae" path="/var/lib/kubelet/pods/85ff3ef7-a005-4881-9004-73bc686b4aae/volumes" Jan 09 10:49:05 crc kubenswrapper[4727]: I0109 10:49:05.003668 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:49:05 crc kubenswrapper[4727]: I0109 10:49:05.003747 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:49:09 crc kubenswrapper[4727]: I0109 10:49:09.405392 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:49:09 crc kubenswrapper[4727]: I0109 10:49:09.406285 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:49:13 crc kubenswrapper[4727]: I0109 10:49:13.752579 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7ll84" Jan 09 10:49:15 crc kubenswrapper[4727]: I0109 10:49:15.003725 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" start-of-body= Jan 09 10:49:15 crc kubenswrapper[4727]: I0109 10:49:15.003785 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded" Jan 09 10:49:19 crc kubenswrapper[4727]: I0109 10:49:19.003415 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:19 crc kubenswrapper[4727]: E0109 10:49:19.636460 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 09 10:49:19 crc kubenswrapper[4727]: E0109 10:49:19.637056 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shvml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePol
icy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-d2hxb_openshift-marketplace(ee7a242f-7b69-4d13-bc60-f9c519d29024): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:19 crc kubenswrapper[4727]: E0109 10:49:19.638282 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d2hxb" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" Jan 09 10:49:22 crc kubenswrapper[4727]: E0109 10:49:22.370297 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d2hxb" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.003217 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-75slj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.003286 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 09 10:49:25 crc 
kubenswrapper[4727]: I0109 10:49:25.234070 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.235060 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.239141 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.239170 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.244399 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.409394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.409963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.510903 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.510961 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.511082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.537718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:25 crc kubenswrapper[4727]: I0109 10:49:25.604850 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:27 crc kubenswrapper[4727]: E0109 10:49:27.434899 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 10:49:27 crc kubenswrapper[4727]: E0109 10:49:27.435143 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjjlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tlqjk_openshift-marketplace(847f9d70-de5c-4bc0-9823-c4074e353565): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:27 crc kubenswrapper[4727]: E0109 10:49:27.436347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.736597 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.806308 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.806559 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f74xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lj7dw_openshift-marketplace(f7741215-a775-4b93-9062-45e620560d49): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.807785 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lj7dw" podUID="f7741215-a775-4b93-9062-45e620560d49" Jan 09 10:49:28 crc 
kubenswrapper[4727]: I0109 10:49:28.865865 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.914021 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:28 crc kubenswrapper[4727]: E0109 10:49:28.914472 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.914558 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.914750 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" containerName="controller-manager" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.915427 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:28 crc kubenswrapper[4727]: I0109 10:49:28.920137 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065088 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065184 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: \"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") pod \"b80bab42-ad32-4ec1-83c3-d939b007a97b\" (UID: 
\"b80bab42-ad32-4ec1-83c3-d939b007a97b\") " Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065582 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065620 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065644 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.065848 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g5j\" (UniqueName: 
\"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.067595 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.067649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.067691 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config" (OuterVolumeSpecName: "config") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.074962 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.082314 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk" (OuterVolumeSpecName: "kube-api-access-vpmsk") pod "b80bab42-ad32-4ec1-83c3-d939b007a97b" (UID: "b80bab42-ad32-4ec1-83c3-d939b007a97b"). InnerVolumeSpecName "kube-api-access-vpmsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.167135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168435 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168596 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168651 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168664 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168675 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80bab42-ad32-4ec1-83c3-d939b007a97b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168684 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80bab42-ad32-4ec1-83c3-d939b007a97b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168693 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpmsk\" (UniqueName: \"kubernetes.io/projected/b80bab42-ad32-4ec1-83c3-d939b007a97b-kube-api-access-vpmsk\") on node \"crc\" DevicePath 
\"\"" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.168341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.170467 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.171298 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.176728 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.186976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"controller-manager-5686478bb9-z9rcn\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") " 
pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.208123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" event={"ID":"b80bab42-ad32-4ec1-83c3-d939b007a97b","Type":"ContainerDied","Data":"bf7c09a3701b9efda131588870469c1b6268f38bdcea1980699756debdae5027"} Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.208153 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-75slj" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.235851 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.252145 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:49:29 crc kubenswrapper[4727]: I0109 10:49:29.257518 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-75slj"] Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.430096 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.431001 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.454468 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.586749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.586812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.586866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688682 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688849 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.688923 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.708771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.762152 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:49:30 crc kubenswrapper[4727]: I0109 10:49:30.876543 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b80bab42-ad32-4ec1-83c3-d939b007a97b" path="/var/lib/kubelet/pods/b80bab42-ad32-4ec1-83c3-d939b007a97b/volumes" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.540328 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lj7dw" podUID="f7741215-a775-4b93-9062-45e620560d49" Jan 09 10:49:32 crc kubenswrapper[4727]: I0109 10:49:32.568196 4727 scope.go:117] "RemoveContainer" containerID="cc187b580510a04e4f135688006730e9c726f008951a569b643c15ebf864f32a" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.631157 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.631943 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk7rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dpfxv_openshift-marketplace(e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.633343 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" Jan 09 10:49:32 crc 
kubenswrapper[4727]: E0109 10:49:32.654848 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.655103 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmdl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-qdwnw_openshift-marketplace(db9e6995-13ec-46a4-a659-0acc617449d3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 10:49:32 crc kubenswrapper[4727]: E0109 10:49:32.659075 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.016497 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 09 10:49:33 crc kubenswrapper[4727]: W0109 10:49:33.026585 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8f187469_eca7_43d1_80a1_5b67f7aff838.slice/crio-ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4 WatchSource:0}: Error finding container ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4: Status 404 returned error can't find the container with id ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.133526 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"] Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.139877 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.155534 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:33 crc kubenswrapper[4727]: W0109 10:49:33.189723 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151a8455_f0b6_44d2_a258_0d7a23683e88.slice/crio-eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8 WatchSource:0}: Error finding container eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8: Status 404 returned error can't find the container with id eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8 Jan 09 10:49:33 crc kubenswrapper[4727]: W0109 10:49:33.192459 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064897bd_61aa_4547_8de9_14abed17dad2.slice/crio-8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9 WatchSource:0}: Error finding container 8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9: Status 404 returned error can't find the container with id 8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.253805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerStarted","Data":"eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.255258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerStarted","Data":"8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.256158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerStarted","Data":"ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4"} Jan 09 10:49:33 crc 
kubenswrapper[4727]: I0109 10:49:33.258177 4727 generic.go:334] "Generic (PLEG): container finished" podID="52829665-e7e7-4322-a38e-731d67de0a1e" containerID="365d92d81408d60ec382bc6ab0b4a9e0d23f934158b015c99820128061ced4a5" exitCode=0 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.258244 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"365d92d81408d60ec382bc6ab0b4a9e0d23f934158b015c99820128061ced4a5"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.266425 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" exitCode=0 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.266480 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.269933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9","Type":"ContainerStarted","Data":"cf4cf680cbea13629ffb9b6b950ebd32261c06a8a59c690f1c39f7cb05418444"} Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.272957 4727 generic.go:334] "Generic (PLEG): container finished" podID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" exitCode=0 Jan 09 10:49:33 crc kubenswrapper[4727]: I0109 10:49:33.273078 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" 
event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568"} Jan 09 10:49:33 crc kubenswrapper[4727]: E0109 10:49:33.274239 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" Jan 09 10:49:33 crc kubenswrapper[4727]: E0109 10:49:33.274462 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.285316 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerStarted","Data":"6db409e85d88995423280632c4625000e42915184376c39b6a7a5ad209ecd5b5"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.289439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerStarted","Data":"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.290705 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.293751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" 
event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerStarted","Data":"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.296746 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" containerID="cri-o://90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" gracePeriod=30 Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.296969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerStarted","Data":"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.297220 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.297239 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.300159 4727 generic.go:334] "Generic (PLEG): container finished" podID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerID="baf98b7c04c8a65d35f9b312da3e7cc77bcd1a0ca0d075f57a151a8fb7edda1a" exitCode=0 Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.300237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9","Type":"ContainerDied","Data":"baf98b7c04c8a65d35f9b312da3e7cc77bcd1a0ca0d075f57a151a8fb7edda1a"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.303070 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerStarted","Data":"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.305528 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.305634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerStarted","Data":"64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be"} Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.344013 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.343978902 podStartE2EDuration="4.343978902s" podCreationTimestamp="2026-01-09 10:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:34.317075679 +0000 UTC m=+219.766980480" watchObservedRunningTime="2026-01-09 10:49:34.343978902 +0000 UTC m=+219.793883683" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.364631 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pgnj5" podStartSLOduration=4.259943759 podStartE2EDuration="52.364608863s" podCreationTimestamp="2026-01-09 10:48:42 +0000 UTC" firstStartedPulling="2026-01-09 10:48:45.58703087 +0000 UTC m=+171.036935651" lastFinishedPulling="2026-01-09 10:49:33.691695974 +0000 UTC m=+219.141600755" observedRunningTime="2026-01-09 10:49:34.341751514 +0000 UTC m=+219.791656295" watchObservedRunningTime="2026-01-09 10:49:34.364608863 +0000 UTC m=+219.814513644" Jan 09 10:49:34 
crc kubenswrapper[4727]: I0109 10:49:34.384570 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dtgwm" podStartSLOduration=3.038206464 podStartE2EDuration="52.384552076s" podCreationTimestamp="2026-01-09 10:48:42 +0000 UTC" firstStartedPulling="2026-01-09 10:48:44.493933453 +0000 UTC m=+169.943838235" lastFinishedPulling="2026-01-09 10:49:33.840279076 +0000 UTC m=+219.290183847" observedRunningTime="2026-01-09 10:49:34.368212322 +0000 UTC m=+219.818117103" watchObservedRunningTime="2026-01-09 10:49:34.384552076 +0000 UTC m=+219.834456857" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.409233 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qzjvr" podStartSLOduration=3.918587048 podStartE2EDuration="54.409209269s" podCreationTimestamp="2026-01-09 10:48:40 +0000 UTC" firstStartedPulling="2026-01-09 10:48:43.280404305 +0000 UTC m=+168.730309086" lastFinishedPulling="2026-01-09 10:49:33.771026516 +0000 UTC m=+219.220931307" observedRunningTime="2026-01-09 10:49:34.408376084 +0000 UTC m=+219.858280875" watchObservedRunningTime="2026-01-09 10:49:34.409209269 +0000 UTC m=+219.859114050" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.432062 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" podStartSLOduration=16.432039358 podStartE2EDuration="16.432039358s" podCreationTimestamp="2026-01-09 10:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:34.428244553 +0000 UTC m=+219.878149334" watchObservedRunningTime="2026-01-09 10:49:34.432039358 +0000 UTC m=+219.881944149" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.472325 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" podStartSLOduration=35.472294972 podStartE2EDuration="35.472294972s" podCreationTimestamp="2026-01-09 10:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:34.467990552 +0000 UTC m=+219.917895353" watchObservedRunningTime="2026-01-09 10:49:34.472294972 +0000 UTC m=+219.922199753" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.823902 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.881483 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:34 crc kubenswrapper[4727]: E0109 10:49:34.882018 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.882080 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.882240 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="064897bd-61aa-4547-8de9-14abed17dad2" containerName="route-controller-manager" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.883151 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.884037 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.953696 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.954247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.954409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.954628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") pod \"064897bd-61aa-4547-8de9-14abed17dad2\" (UID: \"064897bd-61aa-4547-8de9-14abed17dad2\") " Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.955001 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config" (OuterVolumeSpecName: "config") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: 
"064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.955157 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.955669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca" (OuterVolumeSpecName: "client-ca") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: "064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.962158 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6" (OuterVolumeSpecName: "kube-api-access-lr2t6") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: "064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "kube-api-access-lr2t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:34 crc kubenswrapper[4727]: I0109 10:49:34.964444 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "064897bd-61aa-4547-8de9-14abed17dad2" (UID: "064897bd-61aa-4547-8de9-14abed17dad2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056086 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056297 4727 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/064897bd-61aa-4547-8de9-14abed17dad2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056312 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr2t6\" (UniqueName: \"kubernetes.io/projected/064897bd-61aa-4547-8de9-14abed17dad2-kube-api-access-lr2t6\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.056323 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/064897bd-61aa-4547-8de9-14abed17dad2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.157915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.158037 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.158096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 
09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.158130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.159696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.159805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.166453 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"route-controller-manager-579db6f576-7qp6j\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.179499 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"route-controller-manager-579db6f576-7qp6j\" 
(UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") " pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.209893 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335320 4727 generic.go:334] "Generic (PLEG): container finished" podID="064897bd-61aa-4547-8de9-14abed17dad2" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" exitCode=0 Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335436 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335428 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerDied","Data":"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"} Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335908 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2" event={"ID":"064897bd-61aa-4547-8de9-14abed17dad2","Type":"ContainerDied","Data":"8e7548827c8d19db37b7c74e95906a56aed2db797dac368b45eb0186eeab54c9"} Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.335937 4727 scope.go:117] "RemoveContainer" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.376501 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.384080 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-bpjj2"] Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.400968 4727 scope.go:117] "RemoveContainer" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" Jan 09 10:49:35 crc kubenswrapper[4727]: E0109 10:49:35.401495 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4\": container with ID starting with 90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4 not found: ID does not exist" containerID="90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.401568 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4"} err="failed to get container status \"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4\": rpc error: code = NotFound desc = could not find container \"90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4\": container with ID starting with 90c2639d20734277dcfb438af21aea69b26faf65926485b77a348c39c94665e4 not found: ID does not exist" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.669767 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.725778 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"] Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.768839 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") pod \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.769369 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") pod \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\" (UID: \"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9\") " Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.770660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" (UID: "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.777120 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" (UID: "4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.871867 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:35 crc kubenswrapper[4727]: I0109 10:49:35.871923 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.346236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9","Type":"ContainerDied","Data":"cf4cf680cbea13629ffb9b6b950ebd32261c06a8a59c690f1c39f7cb05418444"} Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.346299 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4cf680cbea13629ffb9b6b950ebd32261c06a8a59c690f1c39f7cb05418444" Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.346440 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.355157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerStarted","Data":"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"} Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.355229 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerStarted","Data":"986f3af8c1633fddbf062a038327d6da7e29234701c7af0c999ffe1885a1ca72"} Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.357840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.434018 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.483161 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" podStartSLOduration=17.483135633 podStartE2EDuration="17.483135633s" podCreationTimestamp="2026-01-09 10:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:49:36.385851868 +0000 UTC m=+221.835756669" watchObservedRunningTime="2026-01-09 10:49:36.483135633 +0000 UTC m=+221.933040414" Jan 09 10:49:36 crc kubenswrapper[4727]: I0109 10:49:36.868289 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="064897bd-61aa-4547-8de9-14abed17dad2" path="/var/lib/kubelet/pods/064897bd-61aa-4547-8de9-14abed17dad2/volumes" Jan 09 10:49:38 crc kubenswrapper[4727]: I0109 10:49:38.369139 4727 generic.go:334] "Generic (PLEG): container finished" podID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72" exitCode=0 Jan 09 10:49:38 crc kubenswrapper[4727]: I0109 10:49:38.369245 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"} Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.405030 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.405452 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.405533 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.406273 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:49:39 crc kubenswrapper[4727]: I0109 10:49:39.406330 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827" gracePeriod=600 Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.382801 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827" exitCode=0 Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.382889 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827"} Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.785455 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.785552 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:49:40 crc kubenswrapper[4727]: I0109 10:49:40.875351 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.392315 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerStarted","Data":"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"} Jan 09 10:49:41 crc 
kubenswrapper[4727]: I0109 10:49:41.394962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90"} Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.422230 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2hxb" podStartSLOduration=4.647978733 podStartE2EDuration="1m1.42220498s" podCreationTimestamp="2026-01-09 10:48:40 +0000 UTC" firstStartedPulling="2026-01-09 10:48:43.258803456 +0000 UTC m=+168.708708237" lastFinishedPulling="2026-01-09 10:49:40.033029713 +0000 UTC m=+225.482934484" observedRunningTime="2026-01-09 10:49:41.420217481 +0000 UTC m=+226.870122272" watchObservedRunningTime="2026-01-09 10:49:41.42220498 +0000 UTC m=+226.872109761" Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.468546 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:49:41 crc kubenswrapper[4727]: I0109 10:49:41.516467 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"] Jan 09 10:49:42 crc kubenswrapper[4727]: I0109 10:49:42.925670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:49:42 crc kubenswrapper[4727]: I0109 10:49:42.926153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:49:42 crc kubenswrapper[4727]: I0109 10:49:42.983755 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.401814 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.401887 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.445542 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.458346 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:49:43 crc kubenswrapper[4727]: I0109 10:49:43.524323 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:49:45 crc kubenswrapper[4727]: I0109 10:49:45.427381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerStarted","Data":"020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b"} Jan 09 10:49:45 crc kubenswrapper[4727]: I0109 10:49:45.491903 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:49:45 crc kubenswrapper[4727]: I0109 10:49:45.492192 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pgnj5" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server" containerID="cri-o://64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be" gracePeriod=2 Jan 09 10:49:46 crc kubenswrapper[4727]: I0109 10:49:46.441918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" 
event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b"} Jan 09 10:49:46 crc kubenswrapper[4727]: I0109 10:49:46.441926 4727 generic.go:334] "Generic (PLEG): container finished" podID="847f9d70-de5c-4bc0-9823-c4074e353565" containerID="020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b" exitCode=0 Jan 09 10:49:47 crc kubenswrapper[4727]: I0109 10:49:47.454330 4727 generic.go:334] "Generic (PLEG): container finished" podID="52829665-e7e7-4322-a38e-731d67de0a1e" containerID="64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be" exitCode=0 Jan 09 10:49:47 crc kubenswrapper[4727]: I0109 10:49:47.454375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be"} Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.249628 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.282291 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") pod \"52829665-e7e7-4322-a38e-731d67de0a1e\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.282424 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") pod \"52829665-e7e7-4322-a38e-731d67de0a1e\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.282539 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") pod \"52829665-e7e7-4322-a38e-731d67de0a1e\" (UID: \"52829665-e7e7-4322-a38e-731d67de0a1e\") " Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.283301 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities" (OuterVolumeSpecName: "utilities") pod "52829665-e7e7-4322-a38e-731d67de0a1e" (UID: "52829665-e7e7-4322-a38e-731d67de0a1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.294745 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc" (OuterVolumeSpecName: "kube-api-access-k5hbc") pod "52829665-e7e7-4322-a38e-731d67de0a1e" (UID: "52829665-e7e7-4322-a38e-731d67de0a1e"). InnerVolumeSpecName "kube-api-access-k5hbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.321272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52829665-e7e7-4322-a38e-731d67de0a1e" (UID: "52829665-e7e7-4322-a38e-731d67de0a1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.391194 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.391247 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hbc\" (UniqueName: \"kubernetes.io/projected/52829665-e7e7-4322-a38e-731d67de0a1e-kube-api-access-k5hbc\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.391263 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52829665-e7e7-4322-a38e-731d67de0a1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.462355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgnj5" event={"ID":"52829665-e7e7-4322-a38e-731d67de0a1e","Type":"ContainerDied","Data":"301dab3d04bf736cfc1cfc161435219d3d49e05da644c5b2c0bdb5bb934e1806"} Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.462430 4727 scope.go:117] "RemoveContainer" containerID="64e01181a8ae5e6817daa53bcc72a913e626ccf8c7869c6f77c6ac612ee853be" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.462461 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgnj5" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.494327 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.497534 4727 scope.go:117] "RemoveContainer" containerID="365d92d81408d60ec382bc6ab0b4a9e0d23f934158b015c99820128061ced4a5" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.498650 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgnj5"] Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.530595 4727 scope.go:117] "RemoveContainer" containerID="22ac19595fc4f0a184b8660c25bad2c44186a8659978bbc2fc9d9b604da4ef99" Jan 09 10:49:48 crc kubenswrapper[4727]: I0109 10:49:48.871396 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" path="/var/lib/kubelet/pods/52829665-e7e7-4322-a38e-731d67de0a1e/volumes" Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.499841 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerStarted","Data":"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"} Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.503472 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerStarted","Data":"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc"} Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.507502 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerStarted","Data":"0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607"} Jan 
09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.509979 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerStarted","Data":"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2"} Jan 09 10:49:50 crc kubenswrapper[4727]: I0109 10:49:50.584557 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlqjk" podStartSLOduration=4.301560359 podStartE2EDuration="1m9.58452158s" podCreationTimestamp="2026-01-09 10:48:41 +0000 UTC" firstStartedPulling="2026-01-09 10:48:44.538930722 +0000 UTC m=+169.988835493" lastFinishedPulling="2026-01-09 10:49:49.821891933 +0000 UTC m=+235.271796714" observedRunningTime="2026-01-09 10:49:50.580107287 +0000 UTC m=+236.030012088" watchObservedRunningTime="2026-01-09 10:49:50.58452158 +0000 UTC m=+236.034426371" Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.080030 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.080109 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.121954 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.518744 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7741215-a775-4b93-9062-45e620560d49" containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" exitCode=0 Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.518846 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" 
event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2"} Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.522881 4727 generic.go:334] "Generic (PLEG): container finished" podID="db9e6995-13ec-46a4-a659-0acc617449d3" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b" exitCode=0 Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.522972 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"} Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.532758 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" exitCode=0 Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.532878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc"} Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.546112 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.546188 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:49:51 crc kubenswrapper[4727]: I0109 10:49:51.584063 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.543813 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerStarted","Data":"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"} Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.565864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerStarted","Data":"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1"} Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.570854 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerStarted","Data":"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d"} Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.594317 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qdwnw" podStartSLOduration=3.241966548 podStartE2EDuration="1m8.594286518s" podCreationTimestamp="2026-01-09 10:48:44 +0000 UTC" firstStartedPulling="2026-01-09 10:48:46.639869395 +0000 UTC m=+172.089774176" lastFinishedPulling="2026-01-09 10:49:51.992189365 +0000 UTC m=+237.442094146" observedRunningTime="2026-01-09 10:49:52.590222305 +0000 UTC m=+238.040127086" watchObservedRunningTime="2026-01-09 10:49:52.594286518 +0000 UTC m=+238.044191299" Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.597725 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" probeResult="failure" output=< Jan 09 10:49:52 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:49:52 crc kubenswrapper[4727]: > Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.644098 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dpfxv" podStartSLOduration=4.187093464 podStartE2EDuration="1m9.64407039s" podCreationTimestamp="2026-01-09 10:48:43 +0000 UTC" firstStartedPulling="2026-01-09 10:48:46.649319431 +0000 UTC m=+172.099224212" lastFinishedPulling="2026-01-09 10:49:52.106296357 +0000 UTC m=+237.556201138" observedRunningTime="2026-01-09 10:49:52.617324704 +0000 UTC m=+238.067229495" watchObservedRunningTime="2026-01-09 10:49:52.64407039 +0000 UTC m=+238.093975171" Jan 09 10:49:52 crc kubenswrapper[4727]: I0109 10:49:52.644723 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lj7dw" podStartSLOduration=5.311533839 podStartE2EDuration="1m12.644717069s" podCreationTimestamp="2026-01-09 10:48:40 +0000 UTC" firstStartedPulling="2026-01-09 10:48:44.631731232 +0000 UTC m=+170.081636013" lastFinishedPulling="2026-01-09 10:49:51.964914462 +0000 UTC m=+237.414819243" observedRunningTime="2026-01-09 10:49:52.640853153 +0000 UTC m=+238.090757934" watchObservedRunningTime="2026-01-09 10:49:52.644717069 +0000 UTC m=+238.094621850" Jan 09 10:49:53 crc kubenswrapper[4727]: I0109 10:49:53.896476 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"] Jan 09 10:49:53 crc kubenswrapper[4727]: I0109 10:49:53.897274 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2hxb" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server" containerID="cri-o://3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28" gracePeriod=2 Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.438798 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.438891 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.491159 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb" Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.589422 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") pod \"ee7a242f-7b69-4d13-bc60-f9c519d29024\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.589763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") pod \"ee7a242f-7b69-4d13-bc60-f9c519d29024\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.589797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") pod \"ee7a242f-7b69-4d13-bc60-f9c519d29024\" (UID: \"ee7a242f-7b69-4d13-bc60-f9c519d29024\") " Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.591731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities" (OuterVolumeSpecName: "utilities") pod "ee7a242f-7b69-4d13-bc60-f9c519d29024" (UID: "ee7a242f-7b69-4d13-bc60-f9c519d29024"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.596867 4727 generic.go:334] "Generic (PLEG): container finished" podID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28" exitCode=0
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"}
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2hxb" event={"ID":"ee7a242f-7b69-4d13-bc60-f9c519d29024","Type":"ContainerDied","Data":"fdad070e71d4bbce550062d735b7d4a59eda1ba60bd27a561289a761c73ac4de"}
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597372 4727 scope.go:117] "RemoveContainer" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.597547 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2hxb"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.599387 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml" (OuterVolumeSpecName: "kube-api-access-shvml") pod "ee7a242f-7b69-4d13-bc60-f9c519d29024" (UID: "ee7a242f-7b69-4d13-bc60-f9c519d29024"). InnerVolumeSpecName "kube-api-access-shvml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.631660 4727 scope.go:117] "RemoveContainer" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.649085 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee7a242f-7b69-4d13-bc60-f9c519d29024" (UID: "ee7a242f-7b69-4d13-bc60-f9c519d29024"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.655308 4727 scope.go:117] "RemoveContainer" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.682956 4727 scope.go:117] "RemoveContainer" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"
Jan 09 10:49:54 crc kubenswrapper[4727]: E0109 10:49:54.683713 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28\": container with ID starting with 3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28 not found: ID does not exist" containerID="3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.683814 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28"} err="failed to get container status \"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28\": rpc error: code = NotFound desc = could not find container \"3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28\": container with ID starting with 3cc422e2ffdab14beac01f433be762fea7697e102c19176fa095148a479dab28 not found: ID does not exist"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.683888 4727 scope.go:117] "RemoveContainer" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"
Jan 09 10:49:54 crc kubenswrapper[4727]: E0109 10:49:54.684367 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72\": container with ID starting with 7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72 not found: ID does not exist" containerID="7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.684424 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72"} err="failed to get container status \"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72\": rpc error: code = NotFound desc = could not find container \"7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72\": container with ID starting with 7cc1407705b9269d980b7a8f5854447f8387736aeb5138861234d9a4bbe78c72 not found: ID does not exist"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.684445 4727 scope.go:117] "RemoveContainer" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"
Jan 09 10:49:54 crc kubenswrapper[4727]: E0109 10:49:54.685718 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e\": container with ID starting with d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e not found: ID does not exist" containerID="d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.685754 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e"} err="failed to get container status \"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e\": rpc error: code = NotFound desc = could not find container \"d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e\": container with ID starting with d0918d2ec046342f98f484e4c62a51d02c0c754d985c4f9c8c7f8f3108bc163e not found: ID does not exist"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.690647 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.690681 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee7a242f-7b69-4d13-bc60-f9c519d29024-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.690696 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shvml\" (UniqueName: \"kubernetes.io/projected/ee7a242f-7b69-4d13-bc60-f9c519d29024-kube-api-access-shvml\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.721596 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qdwnw"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.721665 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qdwnw"
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.939898 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"]
Jan 09 10:49:54 crc kubenswrapper[4727]: I0109 10:49:54.945227 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2hxb"]
Jan 09 10:49:55 crc kubenswrapper[4727]: I0109 10:49:55.485617 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" probeResult="failure" output=<
Jan 09 10:49:55 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Jan 09 10:49:55 crc kubenswrapper[4727]: >
Jan 09 10:49:55 crc kubenswrapper[4727]: I0109 10:49:55.763880 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server" probeResult="failure" output=<
Jan 09 10:49:55 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Jan 09 10:49:55 crc kubenswrapper[4727]: >
Jan 09 10:49:56 crc kubenswrapper[4727]: I0109 10:49:56.867424 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" path="/var/lib/kubelet/pods/ee7a242f-7b69-4d13-bc60-f9c519d29024/volumes"
Jan 09 10:49:58 crc kubenswrapper[4727]: I0109 10:49:58.908734 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"]
Jan 09 10:49:58 crc kubenswrapper[4727]: I0109 10:49:58.909562 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" containerID="cri-o://b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" gracePeriod=30
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.015055 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"]
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.015406 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager" containerID="cri-o://acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" gracePeriod=30
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.522478 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.529048 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582748 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582824 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582912 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582957 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.582995 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") pod \"dff45936-afc6-4df6-9cdd-f813330be05a\" (UID: \"dff45936-afc6-4df6-9cdd-f813330be05a\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583048 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") pod \"151a8455-f0b6-44d2-a258-0d7a23683e88\" (UID: \"151a8455-f0b6-44d2-a258-0d7a23683e88\") "
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.583866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca" (OuterVolumeSpecName: "client-ca") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config" (OuterVolumeSpecName: "config") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584738 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca" (OuterVolumeSpecName: "client-ca") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.584944 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config" (OuterVolumeSpecName: "config") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.589363 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j" (OuterVolumeSpecName: "kube-api-access-w8g5j") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "kube-api-access-w8g5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.590139 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.590731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "151a8455-f0b6-44d2-a258-0d7a23683e88" (UID: "151a8455-f0b6-44d2-a258-0d7a23683e88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.591633 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl" (OuterVolumeSpecName: "kube-api-access-lrdbl") pod "dff45936-afc6-4df6-9cdd-f813330be05a" (UID: "dff45936-afc6-4df6-9cdd-f813330be05a"). InnerVolumeSpecName "kube-api-access-lrdbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.630955 4727 generic.go:334] "Generic (PLEG): container finished" podID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a" exitCode=0
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631054 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631047 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerDied","Data":"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"}
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" event={"ID":"151a8455-f0b6-44d2-a258-0d7a23683e88","Type":"ContainerDied","Data":"eb4e52599368aa758622cd1450775a1cc73b50f71ee1bc4abd2868bc446f36c8"}
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.631190 4727 scope.go:117] "RemoveContainer" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632681 4727 generic.go:334] "Generic (PLEG): container finished" podID="dff45936-afc6-4df6-9cdd-f813330be05a" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca" exitCode=0
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerDied","Data":"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"}
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632761 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j" event={"ID":"dff45936-afc6-4df6-9cdd-f813330be05a","Type":"ContainerDied","Data":"986f3af8c1633fddbf062a038327d6da7e29234701c7af0c999ffe1885a1ca72"}
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.632804 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.659043 4727 scope.go:117] "RemoveContainer" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"
Jan 09 10:49:59 crc kubenswrapper[4727]: E0109 10:49:59.659466 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a\": container with ID starting with b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a not found: ID does not exist" containerID="b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.659516 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a"} err="failed to get container status \"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a\": rpc error: code = NotFound desc = could not find container \"b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a\": container with ID starting with b1cc8b9cb7eeee14c048aa730fd9c45ee8ce5b20b7e7dde137abb7e9c7e7d87a not found: ID does not exist"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.659547 4727 scope.go:117] "RemoveContainer" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.673080 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"]
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.675808 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5686478bb9-z9rcn"]
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.682291 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"]
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686371 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff45936-afc6-4df6-9cdd-f813330be05a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686412 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686425 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-client-ca\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686475 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8g5j\" (UniqueName: \"kubernetes.io/projected/151a8455-f0b6-44d2-a258-0d7a23683e88-kube-api-access-w8g5j\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686498 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686527 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdbl\" (UniqueName: \"kubernetes.io/projected/dff45936-afc6-4df6-9cdd-f813330be05a-kube-api-access-lrdbl\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686538 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/151a8455-f0b6-44d2-a258-0d7a23683e88-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686618 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff45936-afc6-4df6-9cdd-f813330be05a-config\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.686627 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151a8455-f0b6-44d2-a258-0d7a23683e88-config\") on node \"crc\" DevicePath \"\""
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.688647 4727 scope.go:117] "RemoveContainer" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"
Jan 09 10:49:59 crc kubenswrapper[4727]: E0109 10:49:59.689323 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca\": container with ID starting with acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca not found: ID does not exist" containerID="acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.689387 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca"} err="failed to get container status \"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca\": rpc error: code = NotFound desc = could not find container \"acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca\": container with ID starting with acf6d05b5c1b7698c4c740ad35f87492b9b0136ebe0278321b6c18bd426bd5ca not found: ID does not exist"
Jan 09 10:49:59 crc kubenswrapper[4727]: I0109 10:49:59.690213 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-579db6f576-7qp6j"]
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.237550 4727 patch_prober.go:28] interesting pod/controller-manager-5686478bb9-z9rcn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.237685 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5686478bb9-z9rcn" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638181 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"]
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638559 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-content"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638579 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-content"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638592 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638600 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638615 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerName="pruner"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638623 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerName="pruner"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638637 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-utilities"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638645 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-utilities"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638659 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-utilities"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638667 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="extract-utilities"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638676 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-content"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638684 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="extract-content"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638699 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638706 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638716 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638723 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server"
Jan 09 10:50:00 crc kubenswrapper[4727]: E0109 10:50:00.638734 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638742 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638879 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf2eb8d-a74f-46e5-9fbc-7ccb295ab0b9" containerName="pruner"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638896 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee7a242f-7b69-4d13-bc60-f9c519d29024" containerName="registry-server"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638907 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" containerName="route-controller-manager"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638917 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" containerName="controller-manager"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.638931 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="52829665-e7e7-4322-a38e-731d67de0a1e" containerName="registry-server"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.639640 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642618 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642769 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642786 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.642948 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.644292 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.644564 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"]
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.646308 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.647412 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651033 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651713 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.651786 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.652702 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.652935 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.656013 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"]
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.657554 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.659152 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"]
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700360 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700546 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700571 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.700668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802521 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\"
(UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802755 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.802775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: 
I0109 10:50:00.804241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.804287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.804339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.804536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.805498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " 
pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.812441 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.812593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.821935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"route-controller-manager-54b8fd498d-tp6j4\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.823172 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"controller-manager-5cc9fbd87d-grnvl\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.868296 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151a8455-f0b6-44d2-a258-0d7a23683e88" 
path="/var/lib/kubelet/pods/151a8455-f0b6-44d2-a258-0d7a23683e88/volumes" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.869759 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff45936-afc6-4df6-9cdd-f813330be05a" path="/var/lib/kubelet/pods/dff45936-afc6-4df6-9cdd-f813330be05a/volumes" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.962087 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:00 crc kubenswrapper[4727]: I0109 10:50:00.971133 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.175803 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.241457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:50:01 crc kubenswrapper[4727]: W0109 10:50:01.251557 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a81417_459b_4cd9_9be8_d04ac04682e3.slice/crio-66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72 WatchSource:0}: Error finding container 66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72: Status 404 returned error can't find the container with id 66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72 Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.292071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.292154 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.357324 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.589416 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.631691 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.656371 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerStarted","Data":"66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72"} Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.658309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerStarted","Data":"bb2f20dd9c688c9d9ca339c2135912218e93eba35cb1a8cb66863cd0423ab406"} Jan 09 10:50:01 crc kubenswrapper[4727]: I0109 10:50:01.696609 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.664260 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerStarted","Data":"6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9"} Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.667140 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.669785 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerStarted","Data":"ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0"} Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.669959 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.683167 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.689314 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" podStartSLOduration=3.689281444 podStartE2EDuration="3.689281444s" podCreationTimestamp="2026-01-09 10:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:02.685308594 +0000 UTC m=+248.135213365" watchObservedRunningTime="2026-01-09 10:50:02.689281444 +0000 UTC m=+248.139186225" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.710829 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" podStartSLOduration=4.710801473 podStartE2EDuration="4.710801473s" podCreationTimestamp="2026-01-09 10:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:02.707807113 +0000 UTC m=+248.157711894" 
watchObservedRunningTime="2026-01-09 10:50:02.710801473 +0000 UTC m=+248.160706254" Jan 09 10:50:02 crc kubenswrapper[4727]: I0109 10:50:02.780430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:03 crc kubenswrapper[4727]: I0109 10:50:03.890659 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:50:03 crc kubenswrapper[4727]: I0109 10:50:03.891303 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tlqjk" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" containerID="cri-o://0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607" gracePeriod=2 Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.479472 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.523257 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.762104 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:50:04 crc kubenswrapper[4727]: I0109 10:50:04.810649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qdwnw" Jan 09 10:50:06 crc kubenswrapper[4727]: I0109 10:50:06.294791 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"] Jan 09 10:50:06 crc kubenswrapper[4727]: I0109 10:50:06.545600 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" 
podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift" containerID="cri-o://3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" gracePeriod=15 Jan 09 10:50:06 crc kubenswrapper[4727]: I0109 10:50:06.692412 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qdwnw" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server" containerID="cri-o://a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" gracePeriod=2 Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.532092 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619885 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.619928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620069 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620279 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620358 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620401 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620478 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.620569 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") pod \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\" (UID: \"01aaae54-a546-4083-88ea-d3adc6a3ea7e\") " Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.622880 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623125 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623488 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623525 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.623572 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.629788 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9" (OuterVolumeSpecName: "kube-api-access-phph9") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "kube-api-access-phph9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.630183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.630888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631351 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631407 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631384 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.631806 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.632984 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.633086 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "01aaae54-a546-4083-88ea-d3adc6a3ea7e" (UID: "01aaae54-a546-4083-88ea-d3adc6a3ea7e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.644567 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-tsct5"]
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.644873 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645659 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.645740 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-utilities"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645755 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-utilities"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.645770 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645784 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.645869 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-content"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.645879 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="extract-content"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.646430 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9e6995-13ec-46a4-a659-0acc617449d3" containerName="registry-server"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.646458 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerName="oauth-openshift"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.649488 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.667316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-tsct5"]
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.700265 4727 generic.go:334] "Generic (PLEG): container finished" podID="847f9d70-de5c-4bc0-9823-c4074e353565" containerID="0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607" exitCode=0
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.700367 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607"}
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"}
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704468 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdwnw"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704501 4727 scope.go:117] "RemoveContainer" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704362 4727 generic.go:334] "Generic (PLEG): container finished" podID="db9e6995-13ec-46a4-a659-0acc617449d3" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43" exitCode=0
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.704800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdwnw" event={"ID":"db9e6995-13ec-46a4-a659-0acc617449d3","Type":"ContainerDied","Data":"5911bf93f874e3a7b6ad929da2270a83dc3e813d601331738a79ef5a79ff102e"}
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710685 4727 generic.go:334] "Generic (PLEG): container finished" podID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab" exitCode=0
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerDied","Data":"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"}
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710748 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8" event={"ID":"01aaae54-a546-4083-88ea-d3adc6a3ea7e","Type":"ContainerDied","Data":"887701e00f73eb4322aa6d1e2bd519ba9d9e95d1edd0663c388315ca72c944aa"}
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.710824 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ldkw8"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722303 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") pod \"db9e6995-13ec-46a4-a659-0acc617449d3\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") "
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722348 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") pod \"db9e6995-13ec-46a4-a659-0acc617449d3\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") "
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") pod \"db9e6995-13ec-46a4-a659-0acc617449d3\" (UID: \"db9e6995-13ec-46a4-a659-0acc617449d3\") "
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722569 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722593 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722621 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722638 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-policies\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.722695 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpznp\" (UniqueName: \"kubernetes.io/projected/1140a4e4-44b9-4d5f-8232-cea144e8e050-kube-api-access-jpznp\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723586 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-dir\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723798 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities" (OuterVolumeSpecName: "utilities") pod "db9e6995-13ec-46a4-a659-0acc617449d3" (UID: "db9e6995-13ec-46a4-a659-0acc617449d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.723944 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724028 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724184 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724221 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724246 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phph9\" (UniqueName: \"kubernetes.io/projected/01aaae54-a546-4083-88ea-d3adc6a3ea7e-kube-api-access-phph9\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724259 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724283 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724296 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724552 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724569 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724581 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724595 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724609 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724622 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724635 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724647 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.724658 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01aaae54-a546-4083-88ea-d3adc6a3ea7e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.726459 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4" (OuterVolumeSpecName: "kube-api-access-lmdl4") pod "db9e6995-13ec-46a4-a659-0acc617449d3" (UID: "db9e6995-13ec-46a4-a659-0acc617449d3"). InnerVolumeSpecName "kube-api-access-lmdl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.781367 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"]
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.781641 4727 scope.go:117] "RemoveContainer" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.787162 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ldkw8"]
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.803761 4727 scope.go:117] "RemoveContainer" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.820895 4727 scope.go:117] "RemoveContainer" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.821642 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43\": container with ID starting with a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43 not found: ID does not exist" containerID="a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.821698 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43"} err="failed to get container status \"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43\": rpc error: code = NotFound desc = could not find container \"a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43\": container with ID starting with a11a3c628ac158b5dac80c35f8a5bcd11d8a3dea17c46c1fbfa843a974c6bf43 not found: ID does not exist"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.821742 4727 scope.go:117] "RemoveContainer" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.822080 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b\": container with ID starting with 2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b not found: ID does not exist" containerID="2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822109 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b"} err="failed to get container status \"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b\": rpc error: code = NotFound desc = could not find container \"2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b\": container with ID starting with 2cc23859aee2d03c7d58dbc29b164e7076166c6e6f1ba86c79d89791b65c461b not found: ID does not exist"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822129 4727 scope.go:117] "RemoveContainer" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.822371 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24\": container with ID starting with 4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24 not found: ID does not exist" containerID="4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822398 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24"} err="failed to get container status \"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24\": rpc error: code = NotFound desc = could not find container \"4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24\": container with ID starting with 4d2fa5d8e55703768d5beb4e339aa912a8d1e7d98386e2995b035115850b4f24 not found: ID does not exist"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.822415 4727 scope.go:117] "RemoveContainer" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpznp\" (UniqueName: \"kubernetes.io/projected/1140a4e4-44b9-4d5f-8232-cea144e8e050-kube-api-access-jpznp\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-dir\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826250 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826364 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826403 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826426 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-policies\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826485 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826567 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmdl4\" (UniqueName: \"kubernetes.io/projected/db9e6995-13ec-46a4-a659-0acc617449d3-kube-api-access-lmdl4\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.826916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-dir\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.828132 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-service-ca\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.829207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.830385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-audit-policies\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.830957 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.831845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832159 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-error\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832553 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-router-certs\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.832967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.834077 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-system-session\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.834130 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.837040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1140a4e4-44b9-4d5f-8232-cea144e8e050-v4-0-config-user-template-login\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.850571 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpznp\" (UniqueName: \"kubernetes.io/projected/1140a4e4-44b9-4d5f-8232-cea144e8e050-kube-api-access-jpznp\") pod \"oauth-openshift-65556786d7-tsct5\" (UID: \"1140a4e4-44b9-4d5f-8232-cea144e8e050\") " pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.859179 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db9e6995-13ec-46a4-a659-0acc617449d3" (UID: "db9e6995-13ec-46a4-a659-0acc617449d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.868382 4727 scope.go:117] "RemoveContainer" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"
Jan 09 10:50:07 crc kubenswrapper[4727]: E0109 10:50:07.869991 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab\": container with ID starting with 3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab not found: ID does not exist" containerID="3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.870068 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab"} err="failed to get container status \"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab\": rpc error: code = NotFound desc = could not find container \"3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab\": container with ID starting with 3e9a4cc7b4e8738361be7dbdaa650d7d30ee3e13112408381c96c938e0ae89ab not found: ID does not exist"
Jan 09 10:50:07 crc kubenswrapper[4727]: I0109 10:50:07.928966 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9e6995-13ec-46a4-a659-0acc617449d3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.040664 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"]
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.044193 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qdwnw"]
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.083375 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5"
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.112837 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk"
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.234178 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") pod \"847f9d70-de5c-4bc0-9823-c4074e353565\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") "
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.234305 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") pod \"847f9d70-de5c-4bc0-9823-c4074e353565\" (UID: \"847f9d70-de5c-4bc0-9823-c4074e353565\") "
Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.234457 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") pod \"847f9d70-de5c-4bc0-9823-c4074e353565\" (UID:
\"847f9d70-de5c-4bc0-9823-c4074e353565\") " Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.235784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities" (OuterVolumeSpecName: "utilities") pod "847f9d70-de5c-4bc0-9823-c4074e353565" (UID: "847f9d70-de5c-4bc0-9823-c4074e353565"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.239536 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt" (OuterVolumeSpecName: "kube-api-access-cjjlt") pod "847f9d70-de5c-4bc0-9823-c4074e353565" (UID: "847f9d70-de5c-4bc0-9823-c4074e353565"). InnerVolumeSpecName "kube-api-access-cjjlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.288460 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847f9d70-de5c-4bc0-9823-c4074e353565" (UID: "847f9d70-de5c-4bc0-9823-c4074e353565"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.336472 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.336548 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847f9d70-de5c-4bc0-9823-c4074e353565-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.336561 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjlt\" (UniqueName: \"kubernetes.io/projected/847f9d70-de5c-4bc0-9823-c4074e353565-kube-api-access-cjjlt\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.510700 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65556786d7-tsct5"] Jan 09 10:50:08 crc kubenswrapper[4727]: W0109 10:50:08.517247 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1140a4e4_44b9_4d5f_8232_cea144e8e050.slice/crio-29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429 WatchSource:0}: Error finding container 29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429: Status 404 returned error can't find the container with id 29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429 Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.722883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" event={"ID":"1140a4e4-44b9-4d5f-8232-cea144e8e050","Type":"ContainerStarted","Data":"29a1c5ed63493ce571188d85ee422b4f13a940696befb6a02ab66d0a36dab429"} Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.726782 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlqjk" event={"ID":"847f9d70-de5c-4bc0-9823-c4074e353565","Type":"ContainerDied","Data":"4e7da0de585649169fd8cf1b1066a4fe59cfd2aac18387a51307fee26f57796c"} Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.726814 4727 scope.go:117] "RemoveContainer" containerID="0faad0fe325435bf2156ea47fbf8b9acb50f555484037528578af57ffbbd4607" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.726903 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlqjk" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.763886 4727 scope.go:117] "RemoveContainer" containerID="020d5eaa11f03b69c9e84a3c6f747b9646ac5bd4933aa199761865a7855eca7b" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.768719 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.771543 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tlqjk"] Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.788196 4727 scope.go:117] "RemoveContainer" containerID="d91d351a8c554abc2fdcaa83ba21ac1cd2528cb470f7cc7b072bc6c71cf7875d" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.869592 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01aaae54-a546-4083-88ea-d3adc6a3ea7e" path="/var/lib/kubelet/pods/01aaae54-a546-4083-88ea-d3adc6a3ea7e/volumes" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.870398 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" path="/var/lib/kubelet/pods/847f9d70-de5c-4bc0-9823-c4074e353565/volumes" Jan 09 10:50:08 crc kubenswrapper[4727]: I0109 10:50:08.871178 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="db9e6995-13ec-46a4-a659-0acc617449d3" path="/var/lib/kubelet/pods/db9e6995-13ec-46a4-a659-0acc617449d3/volumes" Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.746281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" event={"ID":"1140a4e4-44b9-4d5f-8232-cea144e8e050","Type":"ContainerStarted","Data":"0d133ba109a82b9e848e9de28714152e51f5a7591f70efee78241e61a55d3f3d"} Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.746838 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.752220 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" Jan 09 10:50:09 crc kubenswrapper[4727]: I0109 10:50:09.769689 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65556786d7-tsct5" podStartSLOduration=28.769668408 podStartE2EDuration="28.769668408s" podCreationTimestamp="2026-01-09 10:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:09.769541654 +0000 UTC m=+255.219446465" watchObservedRunningTime="2026-01-09 10:50:09.769668408 +0000 UTC m=+255.219573189" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.166445 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.167127 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="extract-utilities" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167145 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" 
containerName="extract-utilities" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.167167 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="extract-content" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167177 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="extract-content" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.167188 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167194 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.167331 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="847f9d70-de5c-4bc0-9823-c4074e353565" containerName="registry-server" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168390 4727 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168669 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168710 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168877 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168905 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.169001 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.169099 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.168969 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" gracePeriod=15 Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.169974 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170060 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170078 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170087 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170097 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170105 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170116 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170121 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170140 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170145 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170153 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170160 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 10:50:11 crc kubenswrapper[4727]: E0109 10:50:11.170174 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170180 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170290 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170304 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170313 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170322 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170331 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.170567 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.175450 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303843 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.303885 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304204 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.304298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406013 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406152 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406211 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406192 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406416 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.406463 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.765695 4727 generic.go:334] "Generic (PLEG): container finished" podID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerID="6db409e85d88995423280632c4625000e42915184376c39b6a7a5ad209ecd5b5" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.765821 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerDied","Data":"6db409e85d88995423280632c4625000e42915184376c39b6a7a5ad209ecd5b5"} Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.767077 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.770239 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.772494 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773646 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773683 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773694 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" exitCode=0 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773714 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" exitCode=2 Jan 09 10:50:11 crc kubenswrapper[4727]: I0109 10:50:11.773755 4727 scope.go:117] "RemoveContainer" containerID="23540789c5b29cd70223ab1a89422b73d70161900e9896571192ea8cd61ddb2c" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.354424 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:27da5043f12d5307a70c72f97a3fa66058dee448a5dec7cd83b0aa63f5496935\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f05e1dfe1f6582ffaf0843b908ef08d6fd1a032539e2d8ce20fd84ee0c4ec783\\\",\\\"registry.redhat.io/red
hat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1665092989},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:9c98ee6f2d9b7993896c073e43217f838b4429acd29804b046840e375a35a8ec\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc6b4e2a8395d8afad2aa9b9632ecb98ce8dde7c73980fcf5b37cb5648d6b87f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203840338},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8dc6bf40bb85b3c070ac6ce1243b4d687fd575150299376d036af7b541798910\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e9bfaeae78e144645263e94c4eec4e342eeddbe95edd9b8e0ef6c87b7a507ba6\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201485666},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642
465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f0286
4b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"s
izeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.355767 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.356362 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 
09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.357048 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.357263 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:12 crc kubenswrapper[4727]: E0109 10:50:12.357288 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:50:12 crc kubenswrapper[4727]: I0109 10:50:12.782009 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.163333 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.164446 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.234710 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") pod \"8f187469-eca7-43d1-80a1-5b67f7aff838\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.234779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") pod \"8f187469-eca7-43d1-80a1-5b67f7aff838\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f187469-eca7-43d1-80a1-5b67f7aff838" (UID: "8f187469-eca7-43d1-80a1-5b67f7aff838"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235357 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") pod \"8f187469-eca7-43d1-80a1-5b67f7aff838\" (UID: \"8f187469-eca7-43d1-80a1-5b67f7aff838\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235482 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock" (OuterVolumeSpecName: "var-lock") pod "8f187469-eca7-43d1-80a1-5b67f7aff838" (UID: "8f187469-eca7-43d1-80a1-5b67f7aff838"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235871 4727 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-var-lock\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.235889 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f187469-eca7-43d1-80a1-5b67f7aff838-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.266784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f187469-eca7-43d1-80a1-5b67f7aff838" (UID: "8f187469-eca7-43d1-80a1-5b67f7aff838"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.337395 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f187469-eca7-43d1-80a1-5b67f7aff838-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.798522 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f187469-eca7-43d1-80a1-5b67f7aff838","Type":"ContainerDied","Data":"ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4"} Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.798588 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce40672249454e87539bbad057e826143ab1f941c45db10716f5f496ae423fb4" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.798649 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.861425 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.865064 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.865865 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.866392 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.866645 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.944849 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.944923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945028 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945054 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945647 4727 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945671 4727 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:13 crc kubenswrapper[4727]: I0109 10:50:13.945682 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.808151 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.809393 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" exitCode=0 Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.809470 4727 scope.go:117] "RemoveContainer" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.809534 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.827919 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.828160 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.838842 4727 scope.go:117] "RemoveContainer" containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.855212 4727 scope.go:117] "RemoveContainer" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.865730 4727 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.866195 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:14 crc 
kubenswrapper[4727]: I0109 10:50:14.867719 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.873560 4727 scope.go:117] "RemoveContainer" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.888684 4727 scope.go:117] "RemoveContainer" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.907041 4727 scope.go:117] "RemoveContainer" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.926372 4727 scope.go:117] "RemoveContainer" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.927044 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\": container with ID starting with b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d not found: ID does not exist" containerID="b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927104 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d"} err="failed to get container status \"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\": rpc error: code = NotFound desc = could not find container \"b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d\": container with ID starting with b2ac9dfa600beaa685550b9bffe1112273997ac199c2b8f78fd45c7e699a454d not found: ID does not exist" Jan 
09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927143 4727 scope.go:117] "RemoveContainer" containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.927499 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\": container with ID starting with b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c not found: ID does not exist" containerID="b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927548 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c"} err="failed to get container status \"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\": rpc error: code = NotFound desc = could not find container \"b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c\": container with ID starting with b3044d8f5eb5b8fed2683a120d0ac94920d871ce97fb636e30cb1e81d49e083c not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.927571 4727 scope.go:117] "RemoveContainer" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.929117 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\": container with ID starting with e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3 not found: ID does not exist" containerID="e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929156 4727 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3"} err="failed to get container status \"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\": rpc error: code = NotFound desc = could not find container \"e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3\": container with ID starting with e71c7d8ef24124fd09da155aa8d3ad220d444a23fa5e734ed9967fe6beccf3e3 not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929189 4727 scope.go:117] "RemoveContainer" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.929520 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\": container with ID starting with 6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7 not found: ID does not exist" containerID="6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929544 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7"} err="failed to get container status \"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\": rpc error: code = NotFound desc = could not find container \"6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7\": container with ID starting with 6a87ffae1fcf2b35bf6beb8808788a45be86e1cd67a4c6c1d6865ac795facef7 not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.929557 4727 scope.go:117] "RemoveContainer" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.930003 4727 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\": container with ID starting with f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664 not found: ID does not exist" containerID="f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.930081 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664"} err="failed to get container status \"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\": rpc error: code = NotFound desc = could not find container \"f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664\": container with ID starting with f0e0129c882f65581d386e7ffebe4e3001b5d5850784896a7f7fe9be52b2c664 not found: ID does not exist" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.930140 4727 scope.go:117] "RemoveContainer" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" Jan 09 10:50:14 crc kubenswrapper[4727]: E0109 10:50:14.930815 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\": container with ID starting with 3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03 not found: ID does not exist" containerID="3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03" Jan 09 10:50:14 crc kubenswrapper[4727]: I0109 10:50:14.930845 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03"} err="failed to get container status \"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\": rpc error: code = NotFound desc = could not find container 
\"3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03\": container with ID starting with 3716487995e5b4c67538f2d746ea713945346fece5a9a55872430eb8cc6dfe03 not found: ID does not exist" Jan 09 10:50:16 crc kubenswrapper[4727]: E0109 10:50:16.200202 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 10:50:16.200683 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:16 crc kubenswrapper[4727]: E0109 10:50:16.227086 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18890a72a4bdcc92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-09 10:50:16.226081938 +0000 UTC m=+261.675986739,LastTimestamp:2026-01-09 10:50:16.226081938 +0000 UTC m=+261.675986739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 
10:50:16.827247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383"} Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 10:50:16.827915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2a5c0d679da554c0b7a98b17eb420d17afe78530fd47a2e25f888e3a9b7ac285"} Jan 09 10:50:16 crc kubenswrapper[4727]: E0109 10:50:16.828599 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:16 crc kubenswrapper[4727]: I0109 10:50:16.828706 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.613263 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.614201 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.614667 4727 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.615044 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.615417 4727 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:20 crc kubenswrapper[4727]: I0109 10:50:20.615448 4727 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.615708 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms" Jan 09 10:50:20 crc kubenswrapper[4727]: E0109 10:50:20.817335 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Jan 09 10:50:21 crc kubenswrapper[4727]: E0109 10:50:21.219125 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" 
interval="800ms" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.859615 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.860703 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.877756 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.877804 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:21 crc kubenswrapper[4727]: E0109 10:50:21.878366 4727 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:21 crc kubenswrapper[4727]: I0109 10:50:21.878963 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.019932 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.395179 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:27da5043f12d5307a70c72f97a3fa66058dee448a5dec7cd83b0aa63f5496935\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f05e1dfe1f6582ffaf0843b908ef08d6fd1a032539e2d8ce20fd84ee0c4ec783\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1665092989},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:9c98ee6f2d9b7993896c073e43217f838b4429acd29804b046840e375a35a8ec\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc6b4e2a8395d8afad2aa9b9632ecb98ce8dde7c73980fcf5b37cb5648d6b87f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203840338},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8dc6bf40bb85b3c070ac6ce1243b4d687fd575150299376d036af7b541798910\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:e9bfaeae78e144645263e94c4eec4e342eeddbe95edd9b8e0ef6c87b7a507ba6\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201485666},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92
edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.396245 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.396721 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.396983 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.397971 4727 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.398003 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.864013 4727 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c" exitCode=0 Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c"} Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df82453108310ee5491d99c3ab8519fa8f143bc7fad3eba550937861443d094a"} Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871694 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.871723 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:22 crc kubenswrapper[4727]: I0109 10:50:22.872308 4727 status_manager.go:851] "Failed to get status for pod" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 09 10:50:22 crc kubenswrapper[4727]: E0109 10:50:22.872308 4727 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:23 crc kubenswrapper[4727]: I0109 10:50:23.874910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24f1d805a60138ebcc997636f2d59d6b5125ce0d642d83120ec78da78a118c44"} Jan 09 10:50:23 crc kubenswrapper[4727]: I0109 10:50:23.875409 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31c3d97a883790d3118598609d3d1c80e24721635b74f87a0aaf3c2799b56eec"} Jan 09 10:50:23 crc kubenswrapper[4727]: I0109 10:50:23.875424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8497b91934e11eb2647c7aaedd35d8a58acf169c6beab01156c9c3f25639c5b"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.884037 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.885184 4727 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac" exitCode=1 Jan 09 10:50:24 crc kubenswrapper[4727]: 
I0109 10:50:24.885282 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.886144 4727 scope.go:117] "RemoveContainer" containerID="54ef1162c5c4b0cbddd435975e24fb5872db94dc2f88fdc9f25b3be873a746ac" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.889831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"238acc9ca1251829789deed01dc932471fa121ee7097117f9c9e519b9afd2a4f"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.889890 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4e764b3186d3997459148ecbecee09819c81b8771a000f00f9e8da5a490c4a31"} Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.890215 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.890402 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:24 crc kubenswrapper[4727]: I0109 10:50:24.890593 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:25 crc kubenswrapper[4727]: I0109 10:50:25.900822 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 09 10:50:25 crc kubenswrapper[4727]: I0109 
10:50:25.901297 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b53e2517cb4e6619a42c59d7c74c55875f0794ee7a31605dc0f00bc81c72688e"} Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.422289 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.720115 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.724479 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.879841 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.879903 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:26 crc kubenswrapper[4727]: I0109 10:50:26.885649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:29 crc kubenswrapper[4727]: I0109 10:50:29.906409 4727 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:29 crc kubenswrapper[4727]: I0109 10:50:29.908163 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58a99004-d8a8-486e-9785-e6c2b548cc76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-09T10:50:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f0c7e8ca1be94ec648026709cfd7cddfbf7fedf1aa07cd7155b4f1cc8b4a36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-09T10:50:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-09T10:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods 
\"kube-apiserver-crc\" not found" Jan 09 10:50:29 crc kubenswrapper[4727]: I0109 10:50:29.948367 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8209f59-f410-436e-867f-fed2cbaa44c1" Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.930894 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.930942 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.937058 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8209f59-f410-436e-867f-fed2cbaa44c1" Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.944047 4727 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://c8497b91934e11eb2647c7aaedd35d8a58acf169c6beab01156c9c3f25639c5b" Jan 09 10:50:30 crc kubenswrapper[4727]: I0109 10:50:30.944095 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:31 crc kubenswrapper[4727]: I0109 10:50:31.935323 4727 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:31 crc kubenswrapper[4727]: I0109 10:50:31.935370 4727 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="58a99004-d8a8-486e-9785-e6c2b548cc76" Jan 09 10:50:31 crc kubenswrapper[4727]: I0109 10:50:31.940435 4727 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d8209f59-f410-436e-867f-fed2cbaa44c1" Jan 09 10:50:36 crc kubenswrapper[4727]: I0109 10:50:36.062028 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 09 10:50:36 crc kubenswrapper[4727]: I0109 10:50:36.426710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.076355 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.495879 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.543888 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.552751 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.841341 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 09 10:50:37 crc kubenswrapper[4727]: I0109 10:50:37.950161 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 09 10:50:38 crc kubenswrapper[4727]: I0109 10:50:38.120309 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 09 10:50:38 crc kubenswrapper[4727]: I0109 10:50:38.708246 4727 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 09 10:50:39 crc kubenswrapper[4727]: I0109 10:50:39.137444 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 09 10:50:39 crc kubenswrapper[4727]: I0109 10:50:39.211496 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 09 10:50:40 crc kubenswrapper[4727]: I0109 10:50:40.623216 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 09 10:50:40 crc kubenswrapper[4727]: I0109 10:50:40.720168 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 09 10:50:40 crc kubenswrapper[4727]: I0109 10:50:40.872083 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.062036 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.130326 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.145964 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.237703 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.378772 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.418229 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.620445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 09 10:50:41 crc kubenswrapper[4727]: I0109 10:50:41.882217 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.044534 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.069597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.070386 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.180050 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.238680 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.284468 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.316691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 09 
10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.317251 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.515440 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.781207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 09 10:50:42 crc kubenswrapper[4727]: I0109 10:50:42.958306 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.189171 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.281814 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.369896 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.450266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.774692 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.932922 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 09 10:50:43 crc kubenswrapper[4727]: I0109 10:50:43.995860 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 09 10:50:44 crc 
kubenswrapper[4727]: I0109 10:50:44.107432 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.208062 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.486965 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.577854 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.636142 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.636651 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.649631 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.687256 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.710608 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.737754 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.784221 4727 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.837092 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.852180 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.937335 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 09 10:50:44 crc kubenswrapper[4727]: I0109 10:50:44.987801 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.022817 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.195945 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.286714 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.557131 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.593302 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.617084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.624183 4727 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.667937 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.700597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.968473 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.978469 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.988722 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.988805 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 09 10:50:45 crc kubenswrapper[4727]: I0109 10:50:45.993671 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.003993 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.010900 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.010877394 podStartE2EDuration="17.010877394s" podCreationTimestamp="2026-01-09 10:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:50:46.006662383 +0000 UTC m=+291.456567164" 
watchObservedRunningTime="2026-01-09 10:50:46.010877394 +0000 UTC m=+291.460782175" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.040161 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.080323 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.116825 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.248908 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.250646 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.313858 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.350175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.483370 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.483460 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.495411 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 
10:50:46.678235 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 09 10:50:46 crc kubenswrapper[4727]: I0109 10:50:46.831046 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.043064 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.100808 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.150407 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.184540 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.212321 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.329328 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.393026 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.422893 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.442668 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 09 10:50:47 crc kubenswrapper[4727]: 
I0109 10:50:47.555728 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.620838 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.680573 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 09 10:50:47 crc kubenswrapper[4727]: I0109 10:50:47.829621 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.065648 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.090287 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.211918 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.232101 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.328890 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.406144 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.470095 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 09 10:50:48 crc 
kubenswrapper[4727]: I0109 10:50:48.503587 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.690886 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.782848 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.859763 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.859764 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 09 10:50:48 crc kubenswrapper[4727]: I0109 10:50:48.907799 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.007725 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.012409 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.022827 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.048212 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.100350 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.183828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.187766 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.199124 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.255181 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.442650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.646431 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.707404 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.713410 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.767044 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.787628 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 09 10:50:49 
crc kubenswrapper[4727]: I0109 10:50:49.833045 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.857780 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.877912 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.891218 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.961571 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 09 10:50:49 crc kubenswrapper[4727]: I0109 10:50:49.993098 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.003274 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.052757 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.069981 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.220303 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.272575 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.353867 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.354878 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.406927 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.408791 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.437540 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.541449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.565613 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.696801 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 09 10:50:50 crc kubenswrapper[4727]: I0109 10:50:50.987030 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.016351 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.422284 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.430089 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.444394 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.508205 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.530827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.548441 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.646000 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.650913 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.678654 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.694644 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 
10:50:51.784028 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.861661 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.869145 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.875622 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 09 10:50:51 crc kubenswrapper[4727]: I0109 10:50:51.878636 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.179541 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.202562 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.308084 4727 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.308793 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" gracePeriod=5 Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.320317 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.356384 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.364558 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.490497 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.497307 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.518765 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.531318 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.580354 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.593903 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.614407 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.620198 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.639367 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.713893 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.741415 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.770025 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.845719 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.849277 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.883079 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 09 10:50:52 crc kubenswrapper[4727]: I0109 10:50:52.992030 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.038672 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.068580 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.203869 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.288646 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.288886 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.312207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.384740 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.386174 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.390247 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.441315 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.442830 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.454345 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.594034 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 09 10:50:53 crc 
kubenswrapper[4727]: I0109 10:50:53.596222 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.640694 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.662675 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.678227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.679809 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.700840 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.828072 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.835534 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.908449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 09 10:50:53 crc kubenswrapper[4727]: I0109 10:50:53.988138 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.090940 
4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.123413 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.127043 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.215269 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.262967 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.326614 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.528320 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.590331 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.682790 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.683802 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.719373 4727 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 09 
10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.721894 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.723100 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.851086 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.888115 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.965826 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 09 10:50:54 crc kubenswrapper[4727]: I0109 10:50:54.978194 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.010975 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.051650 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.227032 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.289439 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 09 10:50:55 crc 
kubenswrapper[4727]: I0109 10:50:55.408860 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.483478 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.492084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.501661 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.516278 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.580614 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.680754 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.723904 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.777262 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.777784 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.823186 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 09 
10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.896876 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 09 10:50:55 crc kubenswrapper[4727]: I0109 10:50:55.903685 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.131866 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.160301 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.209847 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.429778 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.585453 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.590373 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.629685 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.665583 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 10:50:56 crc kubenswrapper[4727]: I0109 10:50:56.698786 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.198176 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.485645 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.545269 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.656429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.723848 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.744052 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.890008 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 09 10:50:57 crc kubenswrapper[4727]: I0109 10:50:57.890112 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.031081 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034816 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034916 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034903 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034939 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.034944 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035074 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035478 4727 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035537 4727 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035551 4727 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.035562 4727 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.043204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.103943 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.104198 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.104363 4727 scope.go:117] "RemoveContainer" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.104030 4727 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" exitCode=137 Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.133665 4727 scope.go:117] "RemoveContainer" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" Jan 09 10:50:58 crc kubenswrapper[4727]: E0109 10:50:58.135877 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383\": container with ID starting with 344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383 not found: ID does not exist" containerID="344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.135913 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383"} err="failed to get container status \"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383\": rpc error: code = NotFound desc = could not find container \"344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383\": container with ID starting with 344d1f47396db4e64a45750de44ffd4baa14d2dea26b24503d57aff3d5ca0383 not found: ID does not exist" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.136486 4727 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on 
node \"crc\" DevicePath \"\"" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.411434 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.567104 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.868970 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.928231 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"] Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.928596 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager" containerID="cri-o://6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9" gracePeriod=30 Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.937020 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"] Jan 09 10:50:58 crc kubenswrapper[4727]: I0109 10:50:58.937823 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager" containerID="cri-o://ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0" gracePeriod=30 Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.111128 4727 generic.go:334] "Generic (PLEG): container finished" podID="47b307d6-5374-4c43-af7a-57c97019e1a4" 
containerID="ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0" exitCode=0 Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.111704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerDied","Data":"ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0"} Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.113414 4727 generic.go:334] "Generic (PLEG): container finished" podID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerID="6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9" exitCode=0 Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.113476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerDied","Data":"6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9"} Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.376705 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.386642 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.559101 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.560700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.560862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561153 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561391 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561519 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") pod \"47b307d6-5374-4c43-af7a-57c97019e1a4\" (UID: \"47b307d6-5374-4c43-af7a-57c97019e1a4\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") pod \"69a81417-459b-4cd9-9be8-d04ac04682e3\" (UID: \"69a81417-459b-4cd9-9be8-d04ac04682e3\") " Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561834 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.561985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config" (OuterVolumeSpecName: "config") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562748 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562843 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562931 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.563019 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b307d6-5374-4c43-af7a-57c97019e1a4-config\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.562878 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config" (OuterVolumeSpecName: "config") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.567094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv" (OuterVolumeSpecName: "kube-api-access-xjswv") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "kube-api-access-xjswv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.570804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69a81417-459b-4cd9-9be8-d04ac04682e3" (UID: "69a81417-459b-4cd9-9be8-d04ac04682e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.570916 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29" (OuterVolumeSpecName: "kube-api-access-z7m29") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "kube-api-access-z7m29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.571265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47b307d6-5374-4c43-af7a-57c97019e1a4" (UID: "47b307d6-5374-4c43-af7a-57c97019e1a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664686 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69a81417-459b-4cd9-9be8-d04ac04682e3-config\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664733 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b307d6-5374-4c43-af7a-57c97019e1a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664748 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjswv\" (UniqueName: \"kubernetes.io/projected/69a81417-459b-4cd9-9be8-d04ac04682e3-kube-api-access-xjswv\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664762 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69a81417-459b-4cd9-9be8-d04ac04682e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 10:50:59 crc kubenswrapper[4727]: I0109 10:50:59.664771 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7m29\" (UniqueName: \"kubernetes.io/projected/47b307d6-5374-4c43-af7a-57c97019e1a4-kube-api-access-z7m29\") on node \"crc\" DevicePath \"\""
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.122995 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4" event={"ID":"69a81417-459b-4cd9-9be8-d04ac04682e3","Type":"ContainerDied","Data":"66078ce65832ec61a5bf242b8822fe7a23913bf0144ac7798b42e7483cab3f72"}
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.123038 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.123099 4727 scope.go:117] "RemoveContainer" containerID="6a1740dc4d1179f34a8c3291c2123b0fcc96f371a550e7677730bbc6814ebea9"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.125538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl" event={"ID":"47b307d6-5374-4c43-af7a-57c97019e1a4","Type":"ContainerDied","Data":"bb2f20dd9c688c9d9ca339c2135912218e93eba35cb1a8cb66863cd0423ab406"}
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.125629 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.142043 4727 scope.go:117] "RemoveContainer" containerID="ae9c474864394b31e7d70fc36e54da43f16f765429b4f6048886e037b199d7d0"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.156871 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.172287 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.172545 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b8fd498d-tp6j4"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.176830 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.181432 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cc9fbd87d-grnvl"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751138 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"]
Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751480 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751496 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751528 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerName="installer"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751535 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerName="installer"
Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751545 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751552 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager"
Jan 09 10:51:00 crc kubenswrapper[4727]: E0109 10:51:00.751572 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751577 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751681 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f187469-eca7-43d1-80a1-5b67f7aff838" containerName="installer"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751693 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" containerName="controller-manager"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751705 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.751714 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" containerName="route-controller-manager"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.752360 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.754563 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.755660 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.755918 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.756212 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.756817 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.758340 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.759319 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.762166 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.762298 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.763648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.769138 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.769847 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.770773 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.770964 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.771049 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.772170 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.783424 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"]
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.866957 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b307d6-5374-4c43-af7a-57c97019e1a4" path="/var/lib/kubelet/pods/47b307d6-5374-4c43-af7a-57c97019e1a4/volumes"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.867967 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a81417-459b-4cd9-9be8-d04ac04682e3" path="/var/lib/kubelet/pods/69a81417-459b-4cd9-9be8-d04ac04682e3/volumes"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878560 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-client-ca\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878632 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-proxy-ca-bundles\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878676 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-serving-cert\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878706 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvh7x\" (UniqueName: \"kubernetes.io/projected/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-kube-api-access-nvh7x\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878744 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-config\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878773 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878803 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878830 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.878856 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.980883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-serving-cert\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.980964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvh7x\" (UniqueName: \"kubernetes.io/projected/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-kube-api-access-nvh7x\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-config\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981121 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981232 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-client-ca\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.981289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-proxy-ca-bundles\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985111 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-client-ca\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-proxy-ca-bundles\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.985745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-config\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.987608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.990395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:00 crc kubenswrapper[4727]: I0109 10:51:00.991450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-serving-cert\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.005996 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"route-controller-manager-86d887979c-r88nb\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.006113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvh7x\" (UniqueName: \"kubernetes.io/projected/bc6552dd-8901-46c7-afba-4a46dd4ee5fd-kube-api-access-nvh7x\") pod \"controller-manager-595b8f5f7c-24mq6\" (UID: \"bc6552dd-8901-46c7-afba-4a46dd4ee5fd\") " pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.088705 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.101341 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.561521 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"]
Jan 09 10:51:01 crc kubenswrapper[4727]: I0109 10:51:01.566352 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"]
Jan 09 10:51:01 crc kubenswrapper[4727]: W0109 10:51:01.577129 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc6552dd_8901_46c7_afba_4a46dd4ee5fd.slice/crio-542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5 WatchSource:0}: Error finding container 542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5: Status 404 returned error can't find the container with id 542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5
Jan 09 10:51:02 crc kubenswrapper[4727]: I0109 10:51:02.142202 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerStarted","Data":"dfa5f71f305a4ffb2e2fc7b0bcee503ce3fe986d9840097185d2065a70651d33"}
Jan 09 10:51:02 crc kubenswrapper[4727]: I0109 10:51:02.143416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" event={"ID":"bc6552dd-8901-46c7-afba-4a46dd4ee5fd","Type":"ContainerStarted","Data":"542a5e9e6570c4d5a983335a4a90129398c20c5ab75e893cb843056f4ae511d5"}
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.151171 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" event={"ID":"bc6552dd-8901-46c7-afba-4a46dd4ee5fd","Type":"ContainerStarted","Data":"d79f5347ba2719e69b0754febea43c1bfd1db4372db5dad46cd1d02d888d6133"}
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.153074 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.153109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerStarted","Data":"c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea"}
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.153292 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.158863 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6"
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.158997 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.182100 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-595b8f5f7c-24mq6" podStartSLOduration=5.182075006 podStartE2EDuration="5.182075006s" podCreationTimestamp="2026-01-09 10:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:03.177226115 +0000 UTC m=+308.627130896" watchObservedRunningTime="2026-01-09 10:51:03.182075006 +0000 UTC m=+308.631979777"
Jan 09 10:51:03 crc kubenswrapper[4727]: I0109 10:51:03.215981 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" podStartSLOduration=5.215958091 podStartE2EDuration="5.215958091s" podCreationTimestamp="2026-01-09 10:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:03.21592981 +0000 UTC m=+308.665834601" watchObservedRunningTime="2026-01-09 10:51:03.215958091 +0000 UTC m=+308.665862872"
Jan 09 10:51:18 crc kubenswrapper[4727]: I0109 10:51:18.916786 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"]
Jan 09 10:51:18 crc kubenswrapper[4727]: I0109 10:51:18.919221 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager" containerID="cri-o://c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea" gracePeriod=30
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.302934 4727 generic.go:334] "Generic (PLEG): container finished" podID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerID="c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea" exitCode=0
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.303357 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerDied","Data":"c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea"}
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.416706 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.590842 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") "
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.590901 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") "
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.591036 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") "
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.591928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") pod \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\" (UID: \"d015075e-a19d-4f8e-b2fd-b303f8c3b230\") "
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.591942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config" (OuterVolumeSpecName: "config") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.592193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca" (OuterVolumeSpecName: "client-ca") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.592697 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-client-ca\") on node \"crc\" DevicePath \"\""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.592728 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d015075e-a19d-4f8e-b2fd-b303f8c3b230-config\") on node \"crc\" DevicePath \"\""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.602015 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.602704 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8" (OuterVolumeSpecName: "kube-api-access-qvbr8") pod "d015075e-a19d-4f8e-b2fd-b303f8c3b230" (UID: "d015075e-a19d-4f8e-b2fd-b303f8c3b230"). InnerVolumeSpecName "kube-api-access-qvbr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.694642 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvbr8\" (UniqueName: \"kubernetes.io/projected/d015075e-a19d-4f8e-b2fd-b303f8c3b230-kube-api-access-qvbr8\") on node \"crc\" DevicePath \"\""
Jan 09 10:51:19 crc kubenswrapper[4727]: I0109 10:51:19.694700 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d015075e-a19d-4f8e-b2fd-b303f8c3b230-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.316380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb" event={"ID":"d015075e-a19d-4f8e-b2fd-b303f8c3b230","Type":"ContainerDied","Data":"dfa5f71f305a4ffb2e2fc7b0bcee503ce3fe986d9840097185d2065a70651d33"}
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.316466 4727 scope.go:117] "RemoveContainer" containerID="c4cb0529fa6a80f59cfba47d8d4f95d5882eff17f916b255c21e0585e9efccea"
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.316526 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.355150 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"]
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.359567 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-r88nb"]
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.767046 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"]
Jan 09 10:51:20 crc kubenswrapper[4727]: E0109 10:51:20.767393 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager"
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.767410 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager"
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.767568 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" containerName="route-controller-manager"
Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.768079 4727 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.770659 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.777828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.778582 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.778604 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.779240 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.782550 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.782929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.870250 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d015075e-a19d-4f8e-b2fd-b303f8c3b230" path="/var/lib/kubelet/pods/d015075e-a19d-4f8e-b2fd-b303f8c3b230/volumes" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.913898 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod 
\"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.913948 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.913981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:20 crc kubenswrapper[4727]: I0109 10:51:20.914159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.015057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc 
kubenswrapper[4727]: I0109 10:51:21.015110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.015170 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.015212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.016742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.017051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: 
\"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.018855 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.033420 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"route-controller-manager-84864cfc78-rwk8j\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.084306 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:21 crc kubenswrapper[4727]: I0109 10:51:21.510237 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.329299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerStarted","Data":"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13"} Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.329688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerStarted","Data":"ec2e727083cc949091b81c436ace0d73c2ccacd9f8280230033f02c043d2f2e4"} Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.329710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.336113 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:22 crc kubenswrapper[4727]: I0109 10:51:22.353638 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" podStartSLOduration=4.353618929 podStartE2EDuration="4.353618929s" podCreationTimestamp="2026-01-09 10:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:22.350144945 +0000 UTC m=+327.800049736" 
watchObservedRunningTime="2026-01-09 10:51:22.353618929 +0000 UTC m=+327.803523720" Jan 09 10:51:38 crc kubenswrapper[4727]: I0109 10:51:38.944228 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:38 crc kubenswrapper[4727]: I0109 10:51:38.945198 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" containerID="cri-o://52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" gracePeriod=30 Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.423566 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.427920 4727 generic.go:334] "Generic (PLEG): container finished" podID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" exitCode=0 Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.427976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerDied","Data":"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13"} Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.428011 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" event={"ID":"db952fad-8a21-4564-819a-9c6d0f3d7ae5","Type":"ContainerDied","Data":"ec2e727083cc949091b81c436ace0d73c2ccacd9f8280230033f02c043d2f2e4"} Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.428020 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.428033 4727 scope.go:117] "RemoveContainer" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.453365 4727 scope.go:117] "RemoveContainer" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" Jan 09 10:51:39 crc kubenswrapper[4727]: E0109 10:51:39.454204 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13\": container with ID starting with 52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13 not found: ID does not exist" containerID="52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.454259 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13"} err="failed to get container status \"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13\": rpc error: code = NotFound desc = could not find container \"52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13\": container with ID starting with 52e6d08484ba6c24403c58bd736fc549c9e6513e85ac4f9dfe341a18b2c84a13 not found: ID does not exist" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.502928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.502993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.503084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.503125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") pod \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\" (UID: \"db952fad-8a21-4564-819a-9c6d0f3d7ae5\") " Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.504748 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca" (OuterVolumeSpecName: "client-ca") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.504779 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config" (OuterVolumeSpecName: "config") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.510754 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.512477 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw" (OuterVolumeSpecName: "kube-api-access-k6wkw") pod "db952fad-8a21-4564-819a-9c6d0f3d7ae5" (UID: "db952fad-8a21-4564-819a-9c6d0f3d7ae5"). InnerVolumeSpecName "kube-api-access-k6wkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604874 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db952fad-8a21-4564-819a-9c6d0f3d7ae5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604920 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604929 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db952fad-8a21-4564-819a-9c6d0f3d7ae5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.604942 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6wkw\" (UniqueName: \"kubernetes.io/projected/db952fad-8a21-4564-819a-9c6d0f3d7ae5-kube-api-access-k6wkw\") on node \"crc\" DevicePath 
\"\"" Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.766069 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:39 crc kubenswrapper[4727]: I0109 10:51:39.770671 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84864cfc78-rwk8j"] Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.784646 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g"] Jan 09 10:51:40 crc kubenswrapper[4727]: E0109 10:51:40.785167 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.785184 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.785313 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" containerName="route-controller-manager" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.785875 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790451 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790559 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790594 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790615 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790629 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.790783 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.798874 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g"] Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.821488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-config\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.821571 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-client-ca\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.821710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75177484-179d-4ff5-9909-6989da323db6-serving-cert\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.822057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6sb\" (UniqueName: \"kubernetes.io/projected/75177484-179d-4ff5-9909-6989da323db6-kube-api-access-mb6sb\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.867580 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db952fad-8a21-4564-819a-9c6d0f3d7ae5" path="/var/lib/kubelet/pods/db952fad-8a21-4564-819a-9c6d0f3d7ae5/volumes" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.923366 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6sb\" (UniqueName: \"kubernetes.io/projected/75177484-179d-4ff5-9909-6989da323db6-kube-api-access-mb6sb\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 
crc kubenswrapper[4727]: I0109 10:51:40.923452 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-config\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.923477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-client-ca\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.923537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75177484-179d-4ff5-9909-6989da323db6-serving-cert\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.926897 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-client-ca\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.927355 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75177484-179d-4ff5-9909-6989da323db6-config\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: 
\"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.929587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75177484-179d-4ff5-9909-6989da323db6-serving-cert\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:40 crc kubenswrapper[4727]: I0109 10:51:40.941928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6sb\" (UniqueName: \"kubernetes.io/projected/75177484-179d-4ff5-9909-6989da323db6-kube-api-access-mb6sb\") pod \"route-controller-manager-86d887979c-6g62g\" (UID: \"75177484-179d-4ff5-9909-6989da323db6\") " pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:41 crc kubenswrapper[4727]: I0109 10:51:41.112047 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:41 crc kubenswrapper[4727]: I0109 10:51:41.583404 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g"] Jan 09 10:51:41 crc kubenswrapper[4727]: W0109 10:51:41.587931 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75177484_179d_4ff5_9909_6989da323db6.slice/crio-7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1 WatchSource:0}: Error finding container 7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1: Status 404 returned error can't find the container with id 7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1 Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.448587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" event={"ID":"75177484-179d-4ff5-9909-6989da323db6","Type":"ContainerStarted","Data":"89e4f6b42e10d1b55856c208c316d8bff61a3decc7dc23370c01fcb0854f89b7"} Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.449129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" event={"ID":"75177484-179d-4ff5-9909-6989da323db6","Type":"ContainerStarted","Data":"7ffc2913cc679833993696e4af7a3324711362cf7964f242c21bf00c2cf69df1"} Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.449154 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.458596 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" Jan 09 
10:51:42 crc kubenswrapper[4727]: I0109 10:51:42.475333 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d887979c-6g62g" podStartSLOduration=4.475303251 podStartE2EDuration="4.475303251s" podCreationTimestamp="2026-01-09 10:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:51:42.470416889 +0000 UTC m=+347.920321670" watchObservedRunningTime="2026-01-09 10:51:42.475303251 +0000 UTC m=+347.925208032" Jan 09 10:52:09 crc kubenswrapper[4727]: I0109 10:52:09.405875 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:52:09 crc kubenswrapper[4727]: I0109 10:52:09.406294 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.540975 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjlsq"] Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.542608 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.561425 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjlsq"] Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.684904 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-certificates\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-tls\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685526 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-bound-sa-token\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-trusted-ca\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685706 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685768 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.685910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqtj\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-kube-api-access-hqqtj\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.715190 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-trusted-ca\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794373 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqtj\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-kube-api-access-hqqtj\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794449 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-certificates\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 
10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-tls\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.794537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-bound-sa-token\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.796051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-certificates\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.796102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-trusted-ca\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.796331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.802488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.802614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-registry-tls\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.815852 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-bound-sa-token\") pod \"image-registry-66df7c8f76-tjlsq\" (UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.819124 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqtj\" (UniqueName: \"kubernetes.io/projected/0d039e14-b430-43af-90d4-ebc9ba3bbc3c-kube-api-access-hqqtj\") pod \"image-registry-66df7c8f76-tjlsq\" 
(UID: \"0d039e14-b430-43af-90d4-ebc9ba3bbc3c\") " pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:13 crc kubenswrapper[4727]: I0109 10:52:13.864544 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.070099 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tjlsq"] Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.676785 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" event={"ID":"0d039e14-b430-43af-90d4-ebc9ba3bbc3c","Type":"ContainerStarted","Data":"75b91341a178854ccb5cd6309197ea7129ad47e7c925240919b4aff7c0ff816e"} Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.677354 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.677388 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" event={"ID":"0d039e14-b430-43af-90d4-ebc9ba3bbc3c","Type":"ContainerStarted","Data":"b0f14861174f3cf90a27471c4fea3d6a92c164fb3d038a62a63be92f2262c624"} Jan 09 10:52:14 crc kubenswrapper[4727]: I0109 10:52:14.703720 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" podStartSLOduration=1.703691888 podStartE2EDuration="1.703691888s" podCreationTimestamp="2026-01-09 10:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:52:14.701390276 +0000 UTC m=+380.151295057" watchObservedRunningTime="2026-01-09 10:52:14.703691888 +0000 UTC m=+380.153596689" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 
10:52:19.746252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.747683 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qzjvr" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" containerID="cri-o://1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.756477 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.757155 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lj7dw" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" containerID="cri-o://cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.779820 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.780185 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" containerID="cri-o://e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.791714 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.792013 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dtgwm" 
podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" containerID="cri-o://d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.810974 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.811414 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dpfxv" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" containerID="cri-o://9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" gracePeriod=30 Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.829073 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55prz"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.829941 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.850379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55prz"] Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.924365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxgz\" (UniqueName: \"kubernetes.io/projected/82b1f92b-6077-4b4c-876a-3d732a78b2cc-kube-api-access-vjxgz\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.924923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:19 crc kubenswrapper[4727]: I0109 10:52:19.924976 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.028382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55prz\" (UID: 
\"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.028907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.028990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxgz\" (UniqueName: \"kubernetes.io/projected/82b1f92b-6077-4b4c-876a-3d732a78b2cc-kube-api-access-vjxgz\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.030613 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.041849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82b1f92b-6077-4b4c-876a-3d732a78b2cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.051504 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxgz\" 
(UniqueName: \"kubernetes.io/projected/82b1f92b-6077-4b4c-876a-3d732a78b2cc-kube-api-access-vjxgz\") pod \"marketplace-operator-79b997595-55prz\" (UID: \"82b1f92b-6077-4b4c-876a-3d732a78b2cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.227464 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.242242 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.293828 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334541 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") pod \"79d72458-cb87-481a-9697-4377383c296e\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334715 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") pod \"f7741215-a775-4b93-9062-45e620560d49\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334775 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") pod \"f7741215-a775-4b93-9062-45e620560d49\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 
10:52:20.334856 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4c8l\" (UniqueName: \"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") pod \"79d72458-cb87-481a-9697-4377383c296e\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334912 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") pod \"f7741215-a775-4b93-9062-45e620560d49\" (UID: \"f7741215-a775-4b93-9062-45e620560d49\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.334955 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") pod \"79d72458-cb87-481a-9697-4377383c296e\" (UID: \"79d72458-cb87-481a-9697-4377383c296e\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.335734 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "79d72458-cb87-481a-9697-4377383c296e" (UID: "79d72458-cb87-481a-9697-4377383c296e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.341258 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "79d72458-cb87-481a-9697-4377383c296e" (UID: "79d72458-cb87-481a-9697-4377383c296e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.341666 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l" (OuterVolumeSpecName: "kube-api-access-q4c8l") pod "79d72458-cb87-481a-9697-4377383c296e" (UID: "79d72458-cb87-481a-9697-4377383c296e"). InnerVolumeSpecName "kube-api-access-q4c8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.343123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk" (OuterVolumeSpecName: "kube-api-access-f74xk") pod "f7741215-a775-4b93-9062-45e620560d49" (UID: "f7741215-a775-4b93-9062-45e620560d49"). InnerVolumeSpecName "kube-api-access-f74xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.358748 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities" (OuterVolumeSpecName: "utilities") pod "f7741215-a775-4b93-9062-45e620560d49" (UID: "f7741215-a775-4b93-9062-45e620560d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.378128 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.416222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7741215-a775-4b93-9062-45e620560d49" (UID: "f7741215-a775-4b93-9062-45e620560d49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.436863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") pod \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.436979 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") pod \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") pod \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\" (UID: \"b713ecb8-60e3-40f5-b7fa-5cf818b59b99\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437446 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d72458-cb87-481a-9697-4377383c296e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437465 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d72458-cb87-481a-9697-4377383c296e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437479 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: 
I0109 10:52:20.437489 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7741215-a775-4b93-9062-45e620560d49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437499 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4c8l\" (UniqueName: \"kubernetes.io/projected/79d72458-cb87-481a-9697-4377383c296e-kube-api-access-q4c8l\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.437651 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f74xk\" (UniqueName: \"kubernetes.io/projected/f7741215-a775-4b93-9062-45e620560d49-kube-api-access-f74xk\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.438699 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities" (OuterVolumeSpecName: "utilities") pod "b713ecb8-60e3-40f5-b7fa-5cf818b59b99" (UID: "b713ecb8-60e3-40f5-b7fa-5cf818b59b99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.447980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz" (OuterVolumeSpecName: "kube-api-access-w2hhz") pod "b713ecb8-60e3-40f5-b7fa-5cf818b59b99" (UID: "b713ecb8-60e3-40f5-b7fa-5cf818b59b99"). InnerVolumeSpecName "kube-api-access-w2hhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.508817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b713ecb8-60e3-40f5-b7fa-5cf818b59b99" (UID: "b713ecb8-60e3-40f5-b7fa-5cf818b59b99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.539626 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2hhz\" (UniqueName: \"kubernetes.io/projected/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-kube-api-access-w2hhz\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.539670 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.539680 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b713ecb8-60e3-40f5-b7fa-5cf818b59b99-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.703762 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717786 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7741215-a775-4b93-9062-45e620560d49" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lj7dw" event={"ID":"f7741215-a775-4b93-9062-45e620560d49","Type":"ContainerDied","Data":"a179ea666208967ecfd43822950b057cd35581408873a5090e17c2f3344f91f0"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.717940 4727 scope.go:117] "RemoveContainer" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.718106 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lj7dw" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723436 4727 generic.go:334] "Generic (PLEG): container finished" podID="79d72458-cb87-481a-9697-4377383c296e" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerDied","Data":"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723600 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" event={"ID":"79d72458-cb87-481a-9697-4377383c296e","Type":"ContainerDied","Data":"cb8511618c1168f1b695c78cda0dcd1111aea86736fe3350e8e14bc57a092c35"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.723713 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlqcc" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730700 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730865 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzjvr" event={"ID":"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365","Type":"ContainerDied","Data":"fb23bdfd131c74ca699783debec87aba4e592b8f689b5331a1ea091df7d605ad"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.730973 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzjvr" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.741271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") pod \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.741347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") pod \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.741409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") pod \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\" (UID: \"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365\") " Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746530 4727 generic.go:334] "Generic (PLEG): container finished" podID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" exitCode=0 Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtgwm" 
event={"ID":"b713ecb8-60e3-40f5-b7fa-5cf818b59b99","Type":"ContainerDied","Data":"974cefab389bdd1c50fa8159159be952f608b390b753f134588ad26e90c6144f"} Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.746690 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtgwm" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.753428 4727 scope.go:117] "RemoveContainer" containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.754075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities" (OuterVolumeSpecName: "utilities") pod "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" (UID: "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.757220 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd" (OuterVolumeSpecName: "kube-api-access-8p2pd") pod "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" (UID: "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"). InnerVolumeSpecName "kube-api-access-8p2pd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.804165 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.809815 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-55prz"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.820794 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlqcc"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.825862 4727 scope.go:117] "RemoveContainer" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.827084 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.850900 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p2pd\" (UniqueName: \"kubernetes.io/projected/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-kube-api-access-8p2pd\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.850960 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.859925 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lj7dw"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.867099 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" (UID: 
"b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.880276 4727 scope.go:117] "RemoveContainer" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" Jan 09 10:52:20 crc kubenswrapper[4727]: E0109 10:52:20.880867 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d\": container with ID starting with cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d not found: ID does not exist" containerID="cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.880932 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d"} err="failed to get container status \"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d\": rpc error: code = NotFound desc = could not find container \"cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d\": container with ID starting with cd0639499aa1e5007f95126a362389fbf9dc971e5d108869786b475abc361d2d not found: ID does not exist" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.880977 4727 scope.go:117] "RemoveContainer" containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" Jan 09 10:52:20 crc kubenswrapper[4727]: E0109 10:52:20.882662 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2\": container with ID starting with 53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2 not found: ID does not exist" 
containerID="53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.883143 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2"} err="failed to get container status \"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2\": rpc error: code = NotFound desc = could not find container \"53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2\": container with ID starting with 53226f753a77e0c31a49a15ce12d077ae21c99ecc7391027fc3ec95ecb1864c2 not found: ID does not exist" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.883522 4727 scope.go:117] "RemoveContainer" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.884384 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d72458-cb87-481a-9697-4377383c296e" path="/var/lib/kubelet/pods/79d72458-cb87-481a-9697-4377383c296e/volumes" Jan 09 10:52:20 crc kubenswrapper[4727]: E0109 10:52:20.884415 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29\": container with ID starting with 394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29 not found: ID does not exist" containerID="394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.884568 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29"} err="failed to get container status \"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29\": rpc error: code = NotFound desc = could not find container 
\"394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29\": container with ID starting with 394cbe4e6d67e1ec2107109218bac4e28909554c2a8786d37d667c0ca0fc0c29 not found: ID does not exist" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.884600 4727 scope.go:117] "RemoveContainer" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.886036 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7741215-a775-4b93-9062-45e620560d49" path="/var/lib/kubelet/pods/f7741215-a775-4b93-9062-45e620560d49/volumes" Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.889818 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.889858 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtgwm"] Jan 09 10:52:20 crc kubenswrapper[4727]: I0109 10:52:20.953474 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.065900 4727 scope.go:117] "RemoveContainer" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.067616 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00\": container with ID starting with e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00 not found: ID does not exist" containerID="e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.067654 4727 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00"} err="failed to get container status \"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00\": rpc error: code = NotFound desc = could not find container \"e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00\": container with ID starting with e6b3a36515b1a330464876521645ae0fcb98c480553f369e334e272930d34c00 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.067708 4727 scope.go:117] "RemoveContainer" containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.101944 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.101994 4727 scope.go:117] "RemoveContainer" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.105651 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qzjvr"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.126865 4727 scope.go:117] "RemoveContainer" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.148085 4727 scope.go:117] "RemoveContainer" containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.149368 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f\": container with ID starting with 1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f not found: ID does not exist" 
containerID="1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.149416 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f"} err="failed to get container status \"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f\": rpc error: code = NotFound desc = could not find container \"1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f\": container with ID starting with 1e3f1320bccdca70052f2ebbda4c3b19c8e4043a9db8f876992b8a04f27da14f not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.149455 4727 scope.go:117] "RemoveContainer" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.151906 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee\": container with ID starting with 7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee not found: ID does not exist" containerID="7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.151927 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee"} err="failed to get container status \"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee\": rpc error: code = NotFound desc = could not find container \"7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee\": container with ID starting with 7e3067cac54c4170d74f70f7075c23e513c5c015feb3acf4d919152b9df4b5ee not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.151945 4727 scope.go:117] 
"RemoveContainer" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.152235 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188\": container with ID starting with aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188 not found: ID does not exist" containerID="aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.152253 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188"} err="failed to get container status \"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188\": rpc error: code = NotFound desc = could not find container \"aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188\": container with ID starting with aef2bf05a5a7870471625f40c0217c94f6559e66403f3c643cf37be643259188 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.152266 4727 scope.go:117] "RemoveContainer" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.174887 4727 scope.go:117] "RemoveContainer" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.191235 4727 scope.go:117] "RemoveContainer" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.221093 4727 scope.go:117] "RemoveContainer" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.221733 4727 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1\": container with ID starting with d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1 not found: ID does not exist" containerID="d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.221772 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1"} err="failed to get container status \"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1\": rpc error: code = NotFound desc = could not find container \"d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1\": container with ID starting with d3a52b19d6eaffcac2807c6bd9248ecd45457d58b0c16afdffe97cfe11ef81b1 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.221826 4727 scope.go:117] "RemoveContainer" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.222700 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568\": container with ID starting with abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568 not found: ID does not exist" containerID="abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.222723 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568"} err="failed to get container status \"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568\": rpc error: code = NotFound desc = could not find container 
\"abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568\": container with ID starting with abad801e47b1e3340e9f27bac260ba5e40a23a38b7604b7ebd2224f920173568 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.222762 4727 scope.go:117] "RemoveContainer" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.223137 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729\": container with ID starting with 55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729 not found: ID does not exist" containerID="55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.223182 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729"} err="failed to get container status \"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729\": rpc error: code = NotFound desc = could not find container \"55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729\": container with ID starting with 55b9211de50c88eb518ababd582f5e04d97b1b69864f278c48ab5688b8046729 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.223550 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.258805 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") pod \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.258891 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") pod \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.258919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") pod \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\" (UID: \"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2\") " Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.261491 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities" (OuterVolumeSpecName: "utilities") pod "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" (UID: "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.268730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr" (OuterVolumeSpecName: "kube-api-access-vk7rr") pod "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" (UID: "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2"). InnerVolumeSpecName "kube-api-access-vk7rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.361209 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.361272 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk7rr\" (UniqueName: \"kubernetes.io/projected/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-kube-api-access-vk7rr\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.378036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" (UID: "e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.463232 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.757585 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" event={"ID":"82b1f92b-6077-4b4c-876a-3d732a78b2cc","Type":"ContainerStarted","Data":"c4120e6e0b13a12e3c80c4f82c20a071169cc3f87d8d7559288902d5a4135b48"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.757840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.757894 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-55prz" event={"ID":"82b1f92b-6077-4b4c-876a-3d732a78b2cc","Type":"ContainerStarted","Data":"435de44b53e8fad8ef60cf2f001292fc2d53d0bb8b2e47ba5b9c8335f2a7f892"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761162 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" exitCode=0 Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761313 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpfxv" event={"ID":"e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2","Type":"ContainerDied","Data":"42a0ab7a98541544f9ab997a40a54899615fc448eb0ee3864856b67b039437eb"} Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761341 4727 scope.go:117] "RemoveContainer" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761456 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpfxv" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.761540 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.783459 4727 scope.go:117] "RemoveContainer" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.809346 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-55prz" podStartSLOduration=2.809317862 podStartE2EDuration="2.809317862s" podCreationTimestamp="2026-01-09 10:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:52:21.783765789 +0000 UTC m=+387.233670600" watchObservedRunningTime="2026-01-09 10:52:21.809317862 +0000 UTC m=+387.259222643" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.828997 4727 scope.go:117] "RemoveContainer" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.829896 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.835888 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dpfxv"] Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.850297 4727 scope.go:117] "RemoveContainer" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.850940 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1\": container with ID starting with 9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1 not found: ID does not exist" containerID="9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.850980 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1"} err="failed to get container status \"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1\": rpc error: code = NotFound desc = could not find container \"9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1\": container with ID starting with 9e2cf75c58f932ea304e55ff9551db21948c3494b57541b58f8dd3f6738ec9a1 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.851013 4727 scope.go:117] "RemoveContainer" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.851547 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc\": container with ID starting with f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc not found: ID does not exist" containerID="f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.851602 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc"} err="failed to get container status \"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc\": rpc error: code = NotFound desc = could not find container \"f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc\": container with ID 
starting with f5dc744f8964aabc8a10c3020099ac7975876a0283989459b30c8a12c1fd31fc not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.851644 4727 scope.go:117] "RemoveContainer" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.852143 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222\": container with ID starting with d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222 not found: ID does not exist" containerID="d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.852179 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222"} err="failed to get container status \"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222\": rpc error: code = NotFound desc = could not find container \"d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222\": container with ID starting with d8617fefa312c13530ae7512b015cd8877b7c5b9fc5c1205c2c933eedd943222 not found: ID does not exist" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.983731 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vc94w"] Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984532 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984550 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984569 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984578 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984588 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984596 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984606 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984613 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984623 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984630 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984641 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984647 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984655 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984662 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984671 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984678 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984686 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984693 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984700 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984707 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984720 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984726 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-utilities" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984737 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984744 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" Jan 09 10:52:21 crc kubenswrapper[4727]: E0109 10:52:21.984756 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.984764 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="extract-content" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985040 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985052 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985088 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7741215-a775-4b93-9062-45e620560d49" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985101 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d72458-cb87-481a-9697-4377383c296e" containerName="marketplace-operator" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.985109 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" containerName="registry-server" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.986036 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:21 crc kubenswrapper[4727]: I0109 10:52:21.989891 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.002210 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vc94w"] Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.072886 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-utilities\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.073012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhtz\" (UniqueName: \"kubernetes.io/projected/9334dd96-d38c-460b-a258-2bccfc2960d5-kube-api-access-nxhtz\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.073053 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-catalog-content\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.174605 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxhtz\" (UniqueName: \"kubernetes.io/projected/9334dd96-d38c-460b-a258-2bccfc2960d5-kube-api-access-nxhtz\") pod \"redhat-marketplace-vc94w\" (UID: 
\"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.174684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-catalog-content\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.174758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-utilities\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.175490 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-utilities\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.175804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9334dd96-d38c-460b-a258-2bccfc2960d5-catalog-content\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.196924 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxhtz\" (UniqueName: \"kubernetes.io/projected/9334dd96-d38c-460b-a258-2bccfc2960d5-kube-api-access-nxhtz\") pod \"redhat-marketplace-vc94w\" (UID: \"9334dd96-d38c-460b-a258-2bccfc2960d5\") " 
pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.322048 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.728938 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vc94w"] Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.772248 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerStarted","Data":"e7cbcdd9132010adbd3a90684a6068ca43c2e53c5ce10e98fcacef1f67a85ff4"} Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.878739 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365" path="/var/lib/kubelet/pods/b4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365/volumes" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.880118 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b713ecb8-60e3-40f5-b7fa-5cf818b59b99" path="/var/lib/kubelet/pods/b713ecb8-60e3-40f5-b7fa-5cf818b59b99/volumes" Jan 09 10:52:22 crc kubenswrapper[4727]: I0109 10:52:22.880934 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2" path="/var/lib/kubelet/pods/e7e3f567-63b4-4a95-b9df-5ec10f0ec4f2/volumes" Jan 09 10:52:23 crc kubenswrapper[4727]: I0109 10:52:23.781645 4727 generic.go:334] "Generic (PLEG): container finished" podID="9334dd96-d38c-460b-a258-2bccfc2960d5" containerID="dabd27b1cda459657bbd8b387e2be5d4a0ae97b340939ce3f9eaac4a28219f78" exitCode=0 Jan 09 10:52:23 crc kubenswrapper[4727]: I0109 10:52:23.781726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" 
event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerDied","Data":"dabd27b1cda459657bbd8b387e2be5d4a0ae97b340939ce3f9eaac4a28219f78"} Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.174315 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.180022 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.183034 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.185246 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.202947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.203010 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.203069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod 
\"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.303873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.303923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.303969 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.304650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.304667 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"certified-operators-962zg\" (UID: 
\"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.331279 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod \"certified-operators-962zg\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.371366 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.375586 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.378203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.392122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.405395 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.405537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " 
pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.405723 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.498283 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.506398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.506457 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.506570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.507052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.507140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.525308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"community-operators-9rsdw\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.705159 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.763361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 10:52:24 crc kubenswrapper[4727]: I0109 10:52:24.788839 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerStarted","Data":"930189ee498333983e08c7ab2e58382299db3fb83cb58d6430015969c8cef074"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.175955 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.798332 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" exitCode=0 Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.798427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.799070 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerStarted","Data":"357891722b37e84c5d6696b58f957606ce91311ffc64133377aa8cf62644c51c"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.801770 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerID="bf159a57ad831d29f382ffa97b36634879c00d9cea9b38064632f3c6da0f08f3" exitCode=0 Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.801840 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"bf159a57ad831d29f382ffa97b36634879c00d9cea9b38064632f3c6da0f08f3"} Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.808042 4727 generic.go:334] "Generic (PLEG): container finished" podID="9334dd96-d38c-460b-a258-2bccfc2960d5" containerID="6e31712e99875535052645daef8f13cd0833da2c8d963f1f7fb3897ca6598ed6" exitCode=0 Jan 09 10:52:25 crc kubenswrapper[4727]: I0109 10:52:25.808109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerDied","Data":"6e31712e99875535052645daef8f13cd0833da2c8d963f1f7fb3897ca6598ed6"} Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.573256 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gdvvw"] Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.575067 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.576770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.582153 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdvvw"] Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.748183 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg7s5\" (UniqueName: \"kubernetes.io/projected/86044c1d-9cd9-49f7-b906-011e3856e591-kube-api-access-fg7s5\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.748261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-utilities\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.748318 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-catalog-content\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.821145 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerStarted","Data":"5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946"} Jan 09 
10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.824268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vc94w" event={"ID":"9334dd96-d38c-460b-a258-2bccfc2960d5","Type":"ContainerStarted","Data":"72a56b8f8e7b4aa8f070f9ebf9f13419328b07f260f79ff05dcfcd2718ec1dc1"} Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.849888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-catalog-content\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850008 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg7s5\" (UniqueName: \"kubernetes.io/projected/86044c1d-9cd9-49f7-b906-011e3856e591-kube-api-access-fg7s5\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-utilities\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850422 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-utilities\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.850899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86044c1d-9cd9-49f7-b906-011e3856e591-catalog-content\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.873630 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg7s5\" (UniqueName: \"kubernetes.io/projected/86044c1d-9cd9-49f7-b906-011e3856e591-kube-api-access-fg7s5\") pod \"redhat-operators-gdvvw\" (UID: \"86044c1d-9cd9-49f7-b906-011e3856e591\") " pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.889274 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vc94w" podStartSLOduration=3.435071827 podStartE2EDuration="5.889253763s" podCreationTimestamp="2026-01-09 10:52:21 +0000 UTC" firstStartedPulling="2026-01-09 10:52:23.785610295 +0000 UTC m=+389.235515076" lastFinishedPulling="2026-01-09 10:52:26.239792231 +0000 UTC m=+391.689697012" observedRunningTime="2026-01-09 10:52:26.873072254 +0000 UTC m=+392.322977035" watchObservedRunningTime="2026-01-09 10:52:26.889253763 +0000 UTC m=+392.339158534" Jan 09 10:52:26 crc kubenswrapper[4727]: I0109 10:52:26.932958 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.416891 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdvvw"] Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.833743 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" exitCode=0 Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.833858 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f"} Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.836778 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerID="5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946" exitCode=0 Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.836861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946"} Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.838841 4727 generic.go:334] "Generic (PLEG): container finished" podID="86044c1d-9cd9-49f7-b906-011e3856e591" containerID="b64adef4a01330eaf4950f8914c442088b90b7a65a9374c0b9cb3c76b61ac8e6" exitCode=0 Jan 09 10:52:27 crc kubenswrapper[4727]: I0109 10:52:27.839583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerDied","Data":"b64adef4a01330eaf4950f8914c442088b90b7a65a9374c0b9cb3c76b61ac8e6"} Jan 09 10:52:27 crc 
kubenswrapper[4727]: I0109 10:52:27.839623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerStarted","Data":"8fd05d089069d89520a9575e3132cdb8e9cc906016887f4415e0f8747d353211"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.846431 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerStarted","Data":"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.849827 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerStarted","Data":"33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.851386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerStarted","Data":"110a6e90c9d2e6f523b48566eb8ee4d678fcb5a05bf8f3d05067a107a38f34b6"} Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.877457 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9rsdw" podStartSLOduration=2.369992656 podStartE2EDuration="4.877437818s" podCreationTimestamp="2026-01-09 10:52:24 +0000 UTC" firstStartedPulling="2026-01-09 10:52:25.802054028 +0000 UTC m=+391.251958809" lastFinishedPulling="2026-01-09 10:52:28.30949919 +0000 UTC m=+393.759403971" observedRunningTime="2026-01-09 10:52:28.87232358 +0000 UTC m=+394.322228361" watchObservedRunningTime="2026-01-09 10:52:28.877437818 +0000 UTC m=+394.327342599" Jan 09 10:52:28 crc kubenswrapper[4727]: I0109 10:52:28.899365 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-962zg" podStartSLOduration=2.316981627 podStartE2EDuration="4.899345713s" podCreationTimestamp="2026-01-09 10:52:24 +0000 UTC" firstStartedPulling="2026-01-09 10:52:25.803643161 +0000 UTC m=+391.253547962" lastFinishedPulling="2026-01-09 10:52:28.386007267 +0000 UTC m=+393.835912048" observedRunningTime="2026-01-09 10:52:28.895421707 +0000 UTC m=+394.345326488" watchObservedRunningTime="2026-01-09 10:52:28.899345713 +0000 UTC m=+394.349250494" Jan 09 10:52:29 crc kubenswrapper[4727]: I0109 10:52:29.861459 4727 generic.go:334] "Generic (PLEG): container finished" podID="86044c1d-9cd9-49f7-b906-011e3856e591" containerID="110a6e90c9d2e6f523b48566eb8ee4d678fcb5a05bf8f3d05067a107a38f34b6" exitCode=0 Jan 09 10:52:29 crc kubenswrapper[4727]: I0109 10:52:29.862236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerDied","Data":"110a6e90c9d2e6f523b48566eb8ee4d678fcb5a05bf8f3d05067a107a38f34b6"} Jan 09 10:52:31 crc kubenswrapper[4727]: I0109 10:52:31.874630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdvvw" event={"ID":"86044c1d-9cd9-49f7-b906-011e3856e591","Type":"ContainerStarted","Data":"7edded77ffe5b19e0a3f9ce3746e48b3a0700239fe057b835c136da80809e5eb"} Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.323264 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.323332 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.370821 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.390984 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gdvvw" podStartSLOduration=3.421601892 podStartE2EDuration="6.390950724s" podCreationTimestamp="2026-01-09 10:52:26 +0000 UTC" firstStartedPulling="2026-01-09 10:52:27.841718421 +0000 UTC m=+393.291623202" lastFinishedPulling="2026-01-09 10:52:30.811067253 +0000 UTC m=+396.260972034" observedRunningTime="2026-01-09 10:52:31.895983026 +0000 UTC m=+397.345887807" watchObservedRunningTime="2026-01-09 10:52:32.390950724 +0000 UTC m=+397.840855505" Jan 09 10:52:32 crc kubenswrapper[4727]: I0109 10:52:32.927664 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vc94w" Jan 09 10:52:33 crc kubenswrapper[4727]: I0109 10:52:33.879932 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tjlsq" Jan 09 10:52:33 crc kubenswrapper[4727]: I0109 10:52:33.959972 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.499602 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.500112 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.546675 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.706783 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.706874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.747437 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.932748 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-962zg" Jan 09 10:52:34 crc kubenswrapper[4727]: I0109 10:52:34.941397 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 10:52:36 crc kubenswrapper[4727]: I0109 10:52:36.934145 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:36 crc kubenswrapper[4727]: I0109 10:52:36.934568 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:37 crc kubenswrapper[4727]: I0109 10:52:37.984426 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gdvvw" podUID="86044c1d-9cd9-49f7-b906-011e3856e591" containerName="registry-server" probeResult="failure" output=< Jan 09 10:52:37 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:52:37 crc kubenswrapper[4727]: > Jan 09 10:52:39 crc kubenswrapper[4727]: I0109 10:52:39.406328 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:52:39 crc kubenswrapper[4727]: 
I0109 10:52:39.406405 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:52:46 crc kubenswrapper[4727]: I0109 10:52:46.990307 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:47 crc kubenswrapper[4727]: I0109 10:52:47.046285 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gdvvw" Jan 09 10:52:51 crc kubenswrapper[4727]: I0109 10:52:51.056375 4727 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podb4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podb4cf56cb-1bd2-4ba2-84d4-8ad0b7fdd365] : Timed out while waiting for systemd to remove kubepods-burstable-podb4cf56cb_1bd2_4ba2_84d4_8ad0b7fdd365.slice" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.010123 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" containerID="cri-o://fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" gracePeriod=30 Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.439973 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.567971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568036 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568135 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568456 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.568532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") pod \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\" (UID: \"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5\") " Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.569674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.569902 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.580068 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.580072 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.580570 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq" (OuterVolumeSpecName: "kube-api-access-6f5nq") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "kube-api-access-6f5nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.589377 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.595279 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.596180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" (UID: "cc8e38f0-1786-4ad3-8efc-9c04a70ceec5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670437 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f5nq\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-kube-api-access-6f5nq\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670552 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670576 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670598 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670619 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670638 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 09 10:52:59 crc kubenswrapper[4727]: I0109 10:52:59.670657 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044113 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" exitCode=0 Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerDied","Data":"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713"} Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" event={"ID":"cc8e38f0-1786-4ad3-8efc-9c04a70ceec5","Type":"ContainerDied","Data":"ddbd37f0ce66367420bf898e597290bc9a838afaf3a3a6e5e804343b2dd74136"} Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044241 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wfhcs" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.044269 4727 scope.go:117] "RemoveContainer" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.066651 4727 scope.go:117] "RemoveContainer" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" Jan 09 10:53:00 crc kubenswrapper[4727]: E0109 10:53:00.067554 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713\": container with ID starting with fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713 not found: ID does not exist" containerID="fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.067634 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713"} err="failed to get container status \"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713\": rpc error: code = NotFound desc = could not find container \"fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713\": container with ID starting with fb982468a5590d6c2d9fc85a2e69a53643ad327f90f5f88870ba467682712713 not found: ID does not exist" Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.090537 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.116925 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wfhcs"] Jan 09 10:53:00 crc kubenswrapper[4727]: I0109 10:53:00.869910 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" path="/var/lib/kubelet/pods/cc8e38f0-1786-4ad3-8efc-9c04a70ceec5/volumes" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.404630 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.406910 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.407134 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.408427 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:53:09 crc kubenswrapper[4727]: I0109 10:53:09.408861 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90" gracePeriod=600 Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109227 4727 generic.go:334] "Generic 
(PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90" exitCode=0 Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90"} Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31"} Jan 09 10:53:10 crc kubenswrapper[4727]: I0109 10:53:10.109593 4727 scope.go:117] "RemoveContainer" containerID="21cb188ae2851533c4b375d7b739c48c7dc5d499de0e9839a0c50cb2befe9827" Jan 09 10:55:09 crc kubenswrapper[4727]: I0109 10:55:09.405164 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:55:09 crc kubenswrapper[4727]: I0109 10:55:09.406062 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:55:39 crc kubenswrapper[4727]: I0109 10:55:39.405248 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:55:39 crc kubenswrapper[4727]: I0109 10:55:39.405950 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.404856 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.405816 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.405899 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.406988 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:56:09 crc kubenswrapper[4727]: I0109 10:56:09.407083 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31" gracePeriod=600 Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.417430 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31" exitCode=0 Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.417527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31"} Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.418036 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3"} Jan 09 10:56:10 crc kubenswrapper[4727]: I0109 10:56:10.418071 4727 scope.go:117] "RemoveContainer" containerID="26edb5414753618612f667b214c94d0b4e6188861504d8fcb15fbdbb11adaa90" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.448533 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr"] Jan 09 10:57:36 crc kubenswrapper[4727]: E0109 10:57:36.449636 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.449658 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" Jan 09 
10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.449809 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8e38f0-1786-4ad3-8efc-9c04a70ceec5" containerName="registry" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.450445 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.452593 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-x5t9g" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.454874 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.456694 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2qqks"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.458044 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.461128 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5n4rr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.461726 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.474704 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.497874 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlfjg"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.499210 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.502133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2qqks"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.502260 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l6krn" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.505855 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlfjg"] Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.598179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmd6\" (UniqueName: \"kubernetes.io/projected/2715d39f-d488-448b-b6f2-ff592dea195a-kube-api-access-vnmd6\") pod \"cert-manager-858654f9db-2qqks\" (UID: \"2715d39f-d488-448b-b6f2-ff592dea195a\") " pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.598310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqddj\" (UniqueName: \"kubernetes.io/projected/3a45eda8-4151-4b6c-b0f2-ab6416dc34e9-kube-api-access-vqddj\") pod \"cert-manager-cainjector-cf98fcc89-cbsgr\" (UID: \"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.598353 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hhd\" (UniqueName: \"kubernetes.io/projected/5cee0bf6-27dd-4944-bbef-574afbae1542-kube-api-access-l6hhd\") pod \"cert-manager-webhook-687f57d79b-qlfjg\" (UID: \"5cee0bf6-27dd-4944-bbef-574afbae1542\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.700212 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnmd6\" (UniqueName: \"kubernetes.io/projected/2715d39f-d488-448b-b6f2-ff592dea195a-kube-api-access-vnmd6\") pod \"cert-manager-858654f9db-2qqks\" (UID: \"2715d39f-d488-448b-b6f2-ff592dea195a\") " pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.700328 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqddj\" (UniqueName: \"kubernetes.io/projected/3a45eda8-4151-4b6c-b0f2-ab6416dc34e9-kube-api-access-vqddj\") pod \"cert-manager-cainjector-cf98fcc89-cbsgr\" (UID: \"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.700368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hhd\" (UniqueName: \"kubernetes.io/projected/5cee0bf6-27dd-4944-bbef-574afbae1542-kube-api-access-l6hhd\") pod \"cert-manager-webhook-687f57d79b-qlfjg\" (UID: \"5cee0bf6-27dd-4944-bbef-574afbae1542\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.720399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnmd6\" (UniqueName: \"kubernetes.io/projected/2715d39f-d488-448b-b6f2-ff592dea195a-kube-api-access-vnmd6\") pod \"cert-manager-858654f9db-2qqks\" (UID: \"2715d39f-d488-448b-b6f2-ff592dea195a\") " pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.720442 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hhd\" (UniqueName: \"kubernetes.io/projected/5cee0bf6-27dd-4944-bbef-574afbae1542-kube-api-access-l6hhd\") pod \"cert-manager-webhook-687f57d79b-qlfjg\" (UID: \"5cee0bf6-27dd-4944-bbef-574afbae1542\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.720442 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqddj\" (UniqueName: \"kubernetes.io/projected/3a45eda8-4151-4b6c-b0f2-ab6416dc34e9-kube-api-access-vqddj\") pod \"cert-manager-cainjector-cf98fcc89-cbsgr\" (UID: \"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.770867 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.781366 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2qqks" Jan 09 10:57:36 crc kubenswrapper[4727]: I0109 10:57:36.824387 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.047636 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2qqks"] Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.077296 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.090414 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr"] Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.116596 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qlfjg"] Jan 09 10:57:37 crc kubenswrapper[4727]: W0109 10:57:37.125693 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cee0bf6_27dd_4944_bbef_574afbae1542.slice/crio-32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2 WatchSource:0}: Error finding container 32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2: Status 404 returned error can't find the container with id 32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2 Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.953968 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" event={"ID":"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9","Type":"ContainerStarted","Data":"b33dd20b656d4c4d4580edb24506b53bd4d87e60bb7a09a01147e783e7f3db2b"} Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.955878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2qqks" event={"ID":"2715d39f-d488-448b-b6f2-ff592dea195a","Type":"ContainerStarted","Data":"ed0394e70c72e641dbd8d58ae215deffd337bc69141cab91e59ef79b091fd78e"} Jan 09 10:57:37 crc kubenswrapper[4727]: I0109 10:57:37.957305 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" event={"ID":"5cee0bf6-27dd-4944-bbef-574afbae1542","Type":"ContainerStarted","Data":"32a8621eb006d81965f738822bc17177aae2fe43401716cedb7ad1650bc50fc2"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.984864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" event={"ID":"3a45eda8-4151-4b6c-b0f2-ab6416dc34e9","Type":"ContainerStarted","Data":"79d3135513c5bf28f04e5b1a7fda1a1222d9801038ffc7ff9944bfde65affb44"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.986534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2qqks" 
event={"ID":"2715d39f-d488-448b-b6f2-ff592dea195a","Type":"ContainerStarted","Data":"5fc901d294e1e40692b0da336ff9523be5b9030e6f2604f82b82e99de4c0afa6"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.987970 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" event={"ID":"5cee0bf6-27dd-4944-bbef-574afbae1542","Type":"ContainerStarted","Data":"2a9635efe863cde95623b36a60cf0275ad8292f2790f7447e5219732210f774d"} Jan 09 10:57:41 crc kubenswrapper[4727]: I0109 10:57:41.988132 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:42 crc kubenswrapper[4727]: I0109 10:57:42.005194 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cbsgr" podStartSLOduration=1.901223648 podStartE2EDuration="6.005168531s" podCreationTimestamp="2026-01-09 10:57:36 +0000 UTC" firstStartedPulling="2026-01-09 10:57:37.101028384 +0000 UTC m=+702.550933165" lastFinishedPulling="2026-01-09 10:57:41.204973267 +0000 UTC m=+706.654878048" observedRunningTime="2026-01-09 10:57:42.00259172 +0000 UTC m=+707.452496501" watchObservedRunningTime="2026-01-09 10:57:42.005168531 +0000 UTC m=+707.455073312" Jan 09 10:57:42 crc kubenswrapper[4727]: I0109 10:57:42.029291 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2qqks" podStartSLOduration=1.898238601 podStartE2EDuration="6.029268136s" podCreationTimestamp="2026-01-09 10:57:36 +0000 UTC" firstStartedPulling="2026-01-09 10:57:37.077004613 +0000 UTC m=+702.526909394" lastFinishedPulling="2026-01-09 10:57:41.208034148 +0000 UTC m=+706.657938929" observedRunningTime="2026-01-09 10:57:42.025973301 +0000 UTC m=+707.475878102" watchObservedRunningTime="2026-01-09 10:57:42.029268136 +0000 UTC m=+707.479172937" Jan 09 10:57:42 crc kubenswrapper[4727]: I0109 10:57:42.053778 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" podStartSLOduration=1.98423102 podStartE2EDuration="6.053758743s" podCreationTimestamp="2026-01-09 10:57:36 +0000 UTC" firstStartedPulling="2026-01-09 10:57:37.129075396 +0000 UTC m=+702.578980177" lastFinishedPulling="2026-01-09 10:57:41.198603129 +0000 UTC m=+706.648507900" observedRunningTime="2026-01-09 10:57:42.049252651 +0000 UTC m=+707.499157442" watchObservedRunningTime="2026-01-09 10:57:42.053758743 +0000 UTC m=+707.503663534" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.075940 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"] Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079048 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" containerID="cri-o://abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079284 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" containerID="cri-o://537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079133 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079170 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" containerID="cri-o://2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079221 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" containerID="cri-o://ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079226 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" containerID="cri-o://74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.079819 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" containerID="cri-o://9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.129931 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" containerID="cri-o://38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" gracePeriod=30 Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.425745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.429707 4727 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-acl-logging/0.log" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.430723 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-controller/0.log" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.432304 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462543 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: 
\"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462709 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462823 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462811 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462832 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462887 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462911 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462930 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462907 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket" (OuterVolumeSpecName: "log-socket") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462962 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462977 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462999 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash" (OuterVolumeSpecName: "host-slash") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.462926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463093 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463021 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463142 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463188 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463197 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463209 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") pod \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\" (UID: \"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40\") " Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log" (OuterVolumeSpecName: "node-log") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463363 4727 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463379 4727 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-log-socket\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463391 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463404 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463414 4727 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463426 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463436 4727 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-slash\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc 
kubenswrapper[4727]: I0109 10:57:46.463445 4727 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463455 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463465 4727 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463475 4727 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-node-log\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463484 4727 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463496 4727 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463358 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463607 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.463680 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.464499 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.477407 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl" (OuterVolumeSpecName: "kube-api-access-d4rgl") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "kube-api-access-d4rgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.478090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.487426 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" (UID: "33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.501792 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sgflm"] Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502259 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502303 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502312 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502326 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502337 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502349 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502358 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502378 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502389 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502395 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502404 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502410 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502419 4727 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502425 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502433 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502440 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502455 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502462 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502469 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kubecfg-setup" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502475 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kubecfg-setup" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502619 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-ovn-metrics" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502635 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="sbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502642 4727 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-acl-logging" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502650 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="northd" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502659 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502667 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovn-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502673 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502683 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502691 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502700 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502709 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="kube-rbac-proxy-node" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502719 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="nbdb" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502833 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502841 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: E0109 10:57:46.502850 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.502856 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerName="ovnkube-controller" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.504816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.563938 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovn-node-metrics-cert\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564180 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-systemd-units\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564249 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-env-overrides\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-node-log\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.564872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-ovn\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565007 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-etc-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565043 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-netd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565183 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-netns\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565306 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9sb\" (UniqueName: \"kubernetes.io/projected/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-kube-api-access-4v9sb\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565390 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-kubelet\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-slash\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565675 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-config\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-bin\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-var-lib-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-log-socket\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565908 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-script-lib\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.565980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-systemd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566609 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566633 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566645 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4rgl\" (UniqueName: \"kubernetes.io/projected/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-kube-api-access-d4rgl\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc 
kubenswrapper[4727]: I0109 10:57:46.566690 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566707 4727 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566723 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.566757 4727 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.667838 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-systemd-units\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.667957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-env-overrides\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.667996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-node-log\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-node-log\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668185 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-ovn\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668188 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-ovn\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-etc-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-netd\") pod \"ovnkube-node-sgflm\" (UID: 
\"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668292 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-netns\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v9sb\" (UniqueName: \"kubernetes.io/projected/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-kube-api-access-4v9sb\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668361 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-kubelet\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-slash\") pod \"ovnkube-node-sgflm\" (UID: 
\"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-netns\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-config\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-bin\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-var-lib-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-log-socket\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668573 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-script-lib\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-systemd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovn-node-metrics-cert\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc 
kubenswrapper[4727]: I0109 10:57:46.668812 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-slash\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668826 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-var-lib-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-kubelet\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668861 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-log-socket\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668885 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-systemd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-bin\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-etc-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-env-overrides\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.668928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-run-openvswitch\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669070 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-host-cni-netd\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669149 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-systemd-units\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-config\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.669622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovnkube-script-lib\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.673193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-ovn-node-metrics-cert\") pod 
\"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.686192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v9sb\" (UniqueName: \"kubernetes.io/projected/dbb43a9b-cf31-4705-9d1e-0447d2520ef6-kube-api-access-4v9sb\") pod \"ovnkube-node-sgflm\" (UID: \"dbb43a9b-cf31-4705-9d1e-0447d2520ef6\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.820232 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:46 crc kubenswrapper[4727]: I0109 10:57:46.828590 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qlfjg" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.019354 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovnkube-controller/3.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.021501 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-acl-logging/0.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.022031 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ngngm_33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/ovn-controller/0.log" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023024 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" exitCode=0 Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023054 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" exitCode=0
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023071 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" exitCode=0
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023082 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" exitCode=0
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023090 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" exitCode=0
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023097 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" exitCode=0
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023105 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" exitCode=143
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023114 4727 generic.go:334] "Generic (PLEG): container finished" podID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" exitCode=143
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023208 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023237 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023396 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023442 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023453 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023459 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023465 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023471 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023477 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023484 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023490 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023499 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023545 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023560 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023569 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023577 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023584 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023590 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023596 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023602 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023608 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023613 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023618 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023636 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023642 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023647 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023652 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023658 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023666 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023672 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023678 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023684 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023689 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm" event={"ID":"33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40","Type":"ContainerDied","Data":"597bf577b4dba1cd023402df59b74489eabbea859cbd226bb31e4a5aff2c01fc"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023705 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023711 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023718 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023724 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023729 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023735 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023740 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023746 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023751 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023756 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.023827 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngngm"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042104 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/2.log"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042537 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/1.log"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042576 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" exitCode=2
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerDied","Data":"dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.042662 4727 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.043725 4727 scope.go:117] "RemoveContainer" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.044185 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-57zpr_openshift-multus(f0230d78-c2b3-4a02-8243-6b39e8eecb90)\"" pod="openshift-multus/multus-57zpr" podUID="f0230d78-c2b3-4a02-8243-6b39e8eecb90"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.052685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerDied","Data":"126918a79692264b592239126cfbf4ecf54be1f24564a8c81bcc09429ded42ae"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.052638 4727 generic.go:334] "Generic (PLEG): container finished" podID="dbb43a9b-cf31-4705-9d1e-0447d2520ef6" containerID="126918a79692264b592239126cfbf4ecf54be1f24564a8c81bcc09429ded42ae" exitCode=0
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.052875 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"9ba0a3778a79450334ce9ba2bbaf2db984b061ad3b6e8325cce6aaf29770eddf"}
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.100822 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.130956 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"]
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.136076 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngngm"]
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.136265 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.171313 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.186364 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.202877 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.218109 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.231827 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.270444 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.292580 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.329795 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.330469 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.330696 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.330735 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.331150 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331176 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331195 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.331873 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331941 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.331988 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.332538 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.332601 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.332638 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.333195 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333259 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333288 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.333749 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333793 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.333811 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.334197 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334237 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334265 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.334621 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334660 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.334684 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.334989 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335016 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335032 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"
Jan 09 10:57:47 crc kubenswrapper[4727]: E0109 10:57:47.335347 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335377 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335394 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335858 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.335876 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336192 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336224 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336597 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336616 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336870 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.336905 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337278 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337302 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337699 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.337723 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338094 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338121 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"
Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338385 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container
with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338442 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338814 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.338839 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339133 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339156 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339663 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.339683 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340196 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340230 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340589 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not 
exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340612 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340870 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.340898 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341142 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341166 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341612 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status 
\"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.341635 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342011 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342033 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342336 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342365 4727 scope.go:117] "RemoveContainer" 
containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342767 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.342796 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343047 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343069 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343337 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could 
not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343356 4727 scope.go:117] "RemoveContainer" containerID="4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343923 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234"} err="failed to get container status \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": rpc error: code = NotFound desc = could not find container \"4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234\": container with ID starting with 4b9201708938162ca642b76bf88cf7b6762e49eedc6f11d3fc7db84f181a8234 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.343940 4727 scope.go:117] "RemoveContainer" containerID="74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.344295 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0"} err="failed to get container status \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": rpc error: code = NotFound desc = could not find container \"74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0\": container with ID starting with 74c20427b8afd660b8b8dbaa4a9b8f293ff106d83c139cf37d63d1cfd4a580e0 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.344317 4727 scope.go:117] "RemoveContainer" containerID="9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 
10:57:47.344718 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3"} err="failed to get container status \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": rpc error: code = NotFound desc = could not find container \"9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3\": container with ID starting with 9bbcde509bfca3d01c26238dd7a4e571035d5745b254a4c4f473739f4e6918a3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.344736 4727 scope.go:117] "RemoveContainer" containerID="ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345024 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3"} err="failed to get container status \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": rpc error: code = NotFound desc = could not find container \"ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3\": container with ID starting with ed89d36e0bf9ad08b0babc4f7490589eb7d46faf320b725e83b0a34addef66f3 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345043 4727 scope.go:117] "RemoveContainer" containerID="a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345290 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313"} err="failed to get container status \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": rpc error: code = NotFound desc = could not find container \"a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313\": container with ID starting with 
a40acdec3a0b41f5f04cb228abae30a0018c7666c7e7f8969f404e54f76b6313 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345315 4727 scope.go:117] "RemoveContainer" containerID="2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345584 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074"} err="failed to get container status \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": rpc error: code = NotFound desc = could not find container \"2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074\": container with ID starting with 2743a5bfdd5d1d499bde8ffec709b53831aa596298e6606d045641c4eac24074 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345617 4727 scope.go:117] "RemoveContainer" containerID="537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345892 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360"} err="failed to get container status \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": rpc error: code = NotFound desc = could not find container \"537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360\": container with ID starting with 537bd30ea451744dedc6223a8e0363e066aa4f184c930f20f0259d66570e9360 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.345920 4727 scope.go:117] "RemoveContainer" containerID="abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346166 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861"} err="failed to get container status \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": rpc error: code = NotFound desc = could not find container \"abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861\": container with ID starting with abf2f5711bd6ba74571025eb11d6b8ab491c5ea709432bad40a3cca0428ad861 not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346185 4727 scope.go:117] "RemoveContainer" containerID="e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346501 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f"} err="failed to get container status \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": rpc error: code = NotFound desc = could not find container \"e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f\": container with ID starting with e8e44e7cb8b091fe1ab65a170b0a9277e2ba2c6aa2ad9c4d4de4ecca813d348f not found: ID does not exist" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346533 4727 scope.go:117] "RemoveContainer" containerID="38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2" Jan 09 10:57:47 crc kubenswrapper[4727]: I0109 10:57:47.346838 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2"} err="failed to get container status \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": rpc error: code = NotFound desc = could not find container \"38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2\": container with ID starting with 38cd6fa013591d70bb0d303a110dbc5fbc40683b73b6a6c0cc2a9fde8811e4e2 not found: ID does not 
exist" Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064206 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"cc86c5a2adf97170714efae5e4a9dbeb3ade1a2a2f330bcc7c5e63899dd38085"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"18ff2d4c2fc9a21816afcdb8664f3f354d174ec4f28d56b4129d2d2f54d86fac"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"f929a9be47ad2af0147e428696085c4a248cfdb8be709778bff92346d93e1be1"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"f9b81f27cc75f27204bce6e56eeb1eb194252ccdf09bc3662711efe3184e517a"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064669 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"fca87e3dc1a22a45db30000b59b85a55e2acecdb2d0c88a0aab738c0275f3a47"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.064683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"96c5486c198d439ff658afed4a3e5a9d006323c69712c441b637ead0840b8c7a"} Jan 09 10:57:48 crc kubenswrapper[4727]: I0109 10:57:48.867834 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40" path="/var/lib/kubelet/pods/33bb3d7e-6f5b-4a7b-b2c7-b04fb8e20e40/volumes" Jan 09 10:57:51 crc kubenswrapper[4727]: I0109 10:57:51.099554 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"197bded7de6d4124ea1df8cf7d8ae446c4892e997e088b615337bc9a8a502bf4"} Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.116962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" event={"ID":"dbb43a9b-cf31-4705-9d1e-0447d2520ef6","Type":"ContainerStarted","Data":"3f6336f513cdb444dfdeac4313fa3385bf0c9a10ad2dcc94f05b26c43409b9d3"} Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.117454 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.117492 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.117532 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.149724 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.158553 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" podStartSLOduration=7.158533404 podStartE2EDuration="7.158533404s" podCreationTimestamp="2026-01-09 10:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
10:57:53.149977853 +0000 UTC m=+718.599882644" watchObservedRunningTime="2026-01-09 10:57:53.158533404 +0000 UTC m=+718.608438185" Jan 09 10:57:53 crc kubenswrapper[4727]: I0109 10:57:53.167465 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:57:55 crc kubenswrapper[4727]: I0109 10:57:55.200414 4727 scope.go:117] "RemoveContainer" containerID="82e65dc4dd21ab3d5aafed8aa6bdd0bc054a950416d4b95f41dd2d05007692bd" Jan 09 10:57:56 crc kubenswrapper[4727]: I0109 10:57:56.144616 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/2.log" Jan 09 10:57:59 crc kubenswrapper[4727]: I0109 10:57:59.860053 4727 scope.go:117] "RemoveContainer" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" Jan 09 10:57:59 crc kubenswrapper[4727]: E0109 10:57:59.860922 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-57zpr_openshift-multus(f0230d78-c2b3-4a02-8243-6b39e8eecb90)\"" pod="openshift-multus/multus-57zpr" podUID="f0230d78-c2b3-4a02-8243-6b39e8eecb90" Jan 09 10:58:09 crc kubenswrapper[4727]: I0109 10:58:09.405656 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:58:09 crc kubenswrapper[4727]: I0109 10:58:09.406448 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 09 10:58:10 crc kubenswrapper[4727]: I0109 10:58:10.860032 4727 scope.go:117] "RemoveContainer" containerID="dcc87b085e5049139f65818e8721373757900c5026b6c14989fb821a7185df08" Jan 09 10:58:12 crc kubenswrapper[4727]: I0109 10:58:12.246252 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-57zpr_f0230d78-c2b3-4a02-8243-6b39e8eecb90/kube-multus/2.log" Jan 09 10:58:12 crc kubenswrapper[4727]: I0109 10:58:12.246923 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-57zpr" event={"ID":"f0230d78-c2b3-4a02-8243-6b39e8eecb90","Type":"ContainerStarted","Data":"9d3cd3d06b0c9e101ffd0febe37ef5a4cfde2cca5e75c9f3f4c24060cd039932"} Jan 09 10:58:16 crc kubenswrapper[4727]: I0109 10:58:16.847923 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgflm" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.177795 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"] Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.179972 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.182419 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.188208 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"] Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.200914 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.200989 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.201016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" Jan 09 10:58:28 crc kubenswrapper[4727]: 
I0109 10:58:28.301998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.302058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.302117 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.302656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.303052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.325138 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.500955 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:28 crc kubenswrapper[4727]: I0109 10:58:28.723217 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"]
Jan 09 10:58:28 crc kubenswrapper[4727]: W0109 10:58:28.727744 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb997fa3_0e55_46ca_b666_d4b710fe2bef.slice/crio-0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a WatchSource:0}: Error finding container 0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a: Status 404 returned error can't find the container with id 0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a
Jan 09 10:58:29 crc kubenswrapper[4727]: I0109 10:58:29.640340 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerID="68a19a8966a90aaacb3c61d973589a87d1c5429eab6039f0a54b20ac0b9be5bf" exitCode=0
Jan 09 10:58:29 crc kubenswrapper[4727]: I0109 10:58:29.640425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"68a19a8966a90aaacb3c61d973589a87d1c5429eab6039f0a54b20ac0b9be5bf"}
Jan 09 10:58:29 crc kubenswrapper[4727]: I0109 10:58:29.640529 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerStarted","Data":"0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a"}
Jan 09 10:58:31 crc kubenswrapper[4727]: I0109 10:58:31.653883 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerID="1f011cad76375d514a22721bf83e8135db90dfe7477723ba431b56651935ae2e" exitCode=0
Jan 09 10:58:31 crc kubenswrapper[4727]: I0109 10:58:31.653942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"1f011cad76375d514a22721bf83e8135db90dfe7477723ba431b56651935ae2e"}
Jan 09 10:58:32 crc kubenswrapper[4727]: I0109 10:58:32.663606 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerID="4de084d41b428c101bfd2216e77e4024d1c53bd2397c213a6dbcdc1ac632fa67" exitCode=0
Jan 09 10:58:32 crc kubenswrapper[4727]: I0109 10:58:32.663678 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"4de084d41b428c101bfd2216e77e4024d1c53bd2397c213a6dbcdc1ac632fa67"}
Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.899734 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.985683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") pod \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") "
Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.985842 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") pod \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") "
Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.985877 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") pod \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\" (UID: \"fb997fa3-0e55-46ca-b666-d4b710fe2bef\") "
Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.986801 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle" (OuterVolumeSpecName: "bundle") pod "fb997fa3-0e55-46ca-b666-d4b710fe2bef" (UID: "fb997fa3-0e55-46ca-b666-d4b710fe2bef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:58:33 crc kubenswrapper[4727]: I0109 10:58:33.993002 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j" (OuterVolumeSpecName: "kube-api-access-dn88j") pod "fb997fa3-0e55-46ca-b666-d4b710fe2bef" (UID: "fb997fa3-0e55-46ca-b666-d4b710fe2bef"). InnerVolumeSpecName "kube-api-access-dn88j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.000180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util" (OuterVolumeSpecName: "util") pod "fb997fa3-0e55-46ca-b666-d4b710fe2bef" (UID: "fb997fa3-0e55-46ca-b666-d4b710fe2bef"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.087478 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.087584 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn88j\" (UniqueName: \"kubernetes.io/projected/fb997fa3-0e55-46ca-b666-d4b710fe2bef-kube-api-access-dn88j\") on node \"crc\" DevicePath \"\""
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.087601 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb997fa3-0e55-46ca-b666-d4b710fe2bef-util\") on node \"crc\" DevicePath \"\""
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.680833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9" event={"ID":"fb997fa3-0e55-46ca-b666-d4b710fe2bef","Type":"ContainerDied","Data":"0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a"}
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.681788 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f158e267f991922410a647cda66c76149c8ec014f949c80732cd4bd7db7be3a"
Jan 09 10:58:34 crc kubenswrapper[4727]: I0109 10:58:34.680968 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.496836 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p86wv"]
Jan 09 10:58:36 crc kubenswrapper[4727]: E0109 10:58:36.497184 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="util"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497200 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="util"
Jan 09 10:58:36 crc kubenswrapper[4727]: E0109 10:58:36.497275 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="pull"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497283 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="pull"
Jan 09 10:58:36 crc kubenswrapper[4727]: E0109 10:58:36.497302 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="extract"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497309 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="extract"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.497434 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb997fa3-0e55-46ca-b666-d4b710fe2bef" containerName="extract"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.498068 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.500826 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.501094 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jfh6k"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.501304 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.514048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p86wv"]
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.629730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlvf\" (UniqueName: \"kubernetes.io/projected/b4c7550e-1eaa-4e85-b44d-c752f6e37955-kube-api-access-mvlvf\") pod \"nmstate-operator-6769fb99d-p86wv\" (UID: \"b4c7550e-1eaa-4e85-b44d-c752f6e37955\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.731330 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlvf\" (UniqueName: \"kubernetes.io/projected/b4c7550e-1eaa-4e85-b44d-c752f6e37955-kube-api-access-mvlvf\") pod \"nmstate-operator-6769fb99d-p86wv\" (UID: \"b4c7550e-1eaa-4e85-b44d-c752f6e37955\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.751297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlvf\" (UniqueName: \"kubernetes.io/projected/b4c7550e-1eaa-4e85-b44d-c752f6e37955-kube-api-access-mvlvf\") pod \"nmstate-operator-6769fb99d-p86wv\" (UID: \"b4c7550e-1eaa-4e85-b44d-c752f6e37955\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv"
Jan 09 10:58:36 crc kubenswrapper[4727]: I0109 10:58:36.815756 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv"
Jan 09 10:58:37 crc kubenswrapper[4727]: I0109 10:58:37.022957 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-p86wv"]
Jan 09 10:58:37 crc kubenswrapper[4727]: I0109 10:58:37.700088 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" event={"ID":"b4c7550e-1eaa-4e85-b44d-c752f6e37955","Type":"ContainerStarted","Data":"e5d9d507c977c1a136a3db9ca4e1875e60ef08f63cb83d834b263ea6d75131c8"}
Jan 09 10:58:39 crc kubenswrapper[4727]: I0109 10:58:39.405560 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 10:58:39 crc kubenswrapper[4727]: I0109 10:58:39.405899 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 10:58:40 crc kubenswrapper[4727]: I0109 10:58:40.719535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" event={"ID":"b4c7550e-1eaa-4e85-b44d-c752f6e37955","Type":"ContainerStarted","Data":"d81a9d168758de43d8a35522ca8bbb7ddeecaac756eb9506fa3e39002f9d5635"}
Jan 09 10:58:40 crc kubenswrapper[4727]: I0109 10:58:40.746489 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-p86wv" podStartSLOduration=1.777079177 podStartE2EDuration="4.746459846s" podCreationTimestamp="2026-01-09 10:58:36 +0000 UTC" firstStartedPulling="2026-01-09 10:58:37.040380185 +0000 UTC m=+762.490284966" lastFinishedPulling="2026-01-09 10:58:40.009760854 +0000 UTC m=+765.459665635" observedRunningTime="2026-01-09 10:58:40.743441468 +0000 UTC m=+766.193346249" watchObservedRunningTime="2026-01-09 10:58:40.746459846 +0000 UTC m=+766.196364657"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.796744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"]
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.798207 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.809734 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"]
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.810635 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.812962 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.818106 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"]
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.819446 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n8254"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.834814 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"]
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.862970 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4757d"]
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.863907 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvsb\" (UniqueName: \"kubernetes.io/projected/673fefde-8c1b-46fe-a88a-00b3fa962a3e-kube-api-access-dnvsb\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vd78\" (UniqueName: \"kubernetes.io/projected/0683f840-0540-443e-8f9d-123b701acbd7-kube-api-access-9vd78\") pod \"nmstate-metrics-7f7f7578db-txtbd\" (UID: \"0683f840-0540-443e-8f9d-123b701acbd7\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907462 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-nmstate-lock\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqcr\" (UniqueName: \"kubernetes.io/projected/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-kube-api-access-bbqcr\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-ovs-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.907699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-dbus-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.965735 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"]
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.966595 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.969686 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.969977 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-r7g68"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.971033 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 09 10:58:41 crc kubenswrapper[4727]: I0109 10:58:41.983626 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"]
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnvsb\" (UniqueName: \"kubernetes.io/projected/673fefde-8c1b-46fe-a88a-00b3fa962a3e-kube-api-access-dnvsb\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009147 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsng\" (UniqueName: \"kubernetes.io/projected/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-kube-api-access-blsng\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vd78\" (UniqueName: \"kubernetes.io/projected/0683f840-0540-443e-8f9d-123b701acbd7-kube-api-access-9vd78\") pod \"nmstate-metrics-7f7f7578db-txtbd\" (UID: \"0683f840-0540-443e-8f9d-123b701acbd7\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-nmstate-lock\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009229 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqcr\" (UniqueName: \"kubernetes.io/projected/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-kube-api-access-bbqcr\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009281 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-ovs-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009361 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-dbus-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.009821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-dbus-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.010305 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-nmstate-lock\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.010449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/673fefde-8c1b-46fe-a88a-00b3fa962a3e-ovs-socket\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.010534 4727 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.010599 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair podName:7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac nodeName:}" failed. No retries permitted until 2026-01-09 10:58:42.510575416 +0000 UTC m=+767.960480227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair") pod "nmstate-webhook-f8fb84555-5lc88" (UID: "7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac") : secret "openshift-nmstate-webhook" not found
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.032156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnvsb\" (UniqueName: \"kubernetes.io/projected/673fefde-8c1b-46fe-a88a-00b3fa962a3e-kube-api-access-dnvsb\") pod \"nmstate-handler-4757d\" (UID: \"673fefde-8c1b-46fe-a88a-00b3fa962a3e\") " pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.032393 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqcr\" (UniqueName: \"kubernetes.io/projected/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-kube-api-access-bbqcr\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.032436 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vd78\" (UniqueName: \"kubernetes.io/projected/0683f840-0540-443e-8f9d-123b701acbd7-kube-api-access-9vd78\") pod \"nmstate-metrics-7f7f7578db-txtbd\" (UID: \"0683f840-0540-443e-8f9d-123b701acbd7\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.110539 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.111048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.111112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blsng\" (UniqueName: \"kubernetes.io/projected/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-kube-api-access-blsng\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.111575 4727 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 09 10:58:42 crc kubenswrapper[4727]: E0109 10:58:42.111634 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert podName:9721a7da-2c8a-4a0d-ac56-8b4b11c028cd nodeName:}" failed. No retries permitted until 2026-01-09 10:58:42.611620886 +0000 UTC m=+768.061525667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-6dwzn" (UID: "9721a7da-2c8a-4a0d-ac56-8b4b11c028cd") : secret "plugin-serving-cert" not found
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.111579 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.120050 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.139555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blsng\" (UniqueName: \"kubernetes.io/projected/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-kube-api-access-blsng\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.178981 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64db668f99-2zfcx"]
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.181790 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4757d"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.182018 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.210087 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64db668f99-2zfcx"]
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214127 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-trusted-ca-bundle\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214200 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72tsv\" (UniqueName: \"kubernetes.io/projected/fb2c8fec-8292-49e4-967f-ac24fe73971b-kube-api-access-72tsv\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214309 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-oauth-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214334 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-service-ca\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214368 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-oauth-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.214386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: W0109 10:58:42.234951 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod673fefde_8c1b_46fe_a88a_00b3fa962a3e.slice/crio-8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839 WatchSource:0}: Error finding container 8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839: Status 404 returned error can't find the container with id 8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-trusted-ca-bundle\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318786 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.318981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72tsv\" (UniqueName: \"kubernetes.io/projected/fb2c8fec-8292-49e4-967f-ac24fe73971b-kube-api-access-72tsv\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-oauth-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319242 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-service-ca\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-oauth-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.319696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.320253 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-oauth-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.321421 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-service-ca\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx"
Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.323198 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2c8fec-8292-49e4-967f-ac24fe73971b-trusted-ca-bundle\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") "
pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.330574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-serving-cert\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.332275 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb2c8fec-8292-49e4-967f-ac24fe73971b-console-oauth-config\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.346381 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72tsv\" (UniqueName: \"kubernetes.io/projected/fb2c8fec-8292-49e4-967f-ac24fe73971b-kube-api-access-72tsv\") pod \"console-64db668f99-2zfcx\" (UID: \"fb2c8fec-8292-49e4-967f-ac24fe73971b\") " pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.504213 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.523601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.534577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-5lc88\" (UID: \"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.625025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.629918 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9721a7da-2c8a-4a0d-ac56-8b4b11c028cd-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-6dwzn\" (UID: \"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.736079 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.747454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4757d" event={"ID":"673fefde-8c1b-46fe-a88a-00b3fa962a3e","Type":"ContainerStarted","Data":"8d2e034f78f7d0a9fa596e50a75669d5545aa18a3f1860d7e079793d86ee3839"} Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.883628 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.922608 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd"] Jan 09 10:58:42 crc kubenswrapper[4727]: I0109 10:58:42.962712 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64db668f99-2zfcx"] Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.485676 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-5lc88"] Jan 09 10:58:43 crc kubenswrapper[4727]: W0109 10:58:43.503188 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b8d8f1f_d4d5_4716_818f_6f5bbf6a2dac.slice/crio-e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db WatchSource:0}: Error finding container e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db: Status 404 returned error can't find the container with id e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.756607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" event={"ID":"0683f840-0540-443e-8f9d-123b701acbd7","Type":"ContainerStarted","Data":"bc5b232de035f7830cbd1039e4b37013034cb3ea57a653dd083955da8a69096e"} Jan 09 
10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.757715 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn"] Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.758575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64db668f99-2zfcx" event={"ID":"fb2c8fec-8292-49e4-967f-ac24fe73971b","Type":"ContainerStarted","Data":"98c3ef45797a650b0546861b0d1a903076f2352aa78d24e8ca67f2a3bbb45410"} Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.758615 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64db668f99-2zfcx" event={"ID":"fb2c8fec-8292-49e4-967f-ac24fe73971b","Type":"ContainerStarted","Data":"c3c31376968e59b6b95ea898756e26082245575dcb97b06007bdadd2d79eebb0"} Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.760304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" event={"ID":"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac","Type":"ContainerStarted","Data":"e767cba68d16d27530eb20baa64d13bae947a60c4fefef1337ac2ed83d3d90db"} Jan 09 10:58:43 crc kubenswrapper[4727]: W0109 10:58:43.762720 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9721a7da_2c8a_4a0d_ac56_8b4b11c028cd.slice/crio-2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9 WatchSource:0}: Error finding container 2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9: Status 404 returned error can't find the container with id 2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9 Jan 09 10:58:43 crc kubenswrapper[4727]: I0109 10:58:43.781935 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64db668f99-2zfcx" podStartSLOduration=1.781900937 podStartE2EDuration="1.781900937s" podCreationTimestamp="2026-01-09 10:58:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 10:58:43.7796858 +0000 UTC m=+769.229590581" watchObservedRunningTime="2026-01-09 10:58:43.781900937 +0000 UTC m=+769.231805718" Jan 09 10:58:44 crc kubenswrapper[4727]: I0109 10:58:44.769560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" event={"ID":"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd","Type":"ContainerStarted","Data":"2ce36b158eac6619050fefe26e7240a11c51ac43b6cf560cd201773ecea772e9"} Jan 09 10:58:45 crc kubenswrapper[4727]: I0109 10:58:45.995246 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.884380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" event={"ID":"0683f840-0540-443e-8f9d-123b701acbd7","Type":"ContainerStarted","Data":"629ae3ccbc0688940e7c4e521882edb3ef170568626e7791955c7debe0e89daf"} Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.887306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" event={"ID":"7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac","Type":"ContainerStarted","Data":"36c025d7a98dd5a684fac187ba986b40ef54701d9c86e601c187476b41e3647e"} Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.887452 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:58:47 crc kubenswrapper[4727]: I0109 10:58:47.913137 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" podStartSLOduration=2.7806615949999998 podStartE2EDuration="6.913113732s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 
10:58:43.516803213 +0000 UTC m=+768.966707994" lastFinishedPulling="2026-01-09 10:58:47.64925533 +0000 UTC m=+773.099160131" observedRunningTime="2026-01-09 10:58:47.903823141 +0000 UTC m=+773.353727962" watchObservedRunningTime="2026-01-09 10:58:47.913113732 +0000 UTC m=+773.363018533" Jan 09 10:58:48 crc kubenswrapper[4727]: I0109 10:58:48.896970 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4757d" event={"ID":"673fefde-8c1b-46fe-a88a-00b3fa962a3e","Type":"ContainerStarted","Data":"f2c8e8daa9a45a5ead4a602cefae7bb4736e062dff1e9b891726842cf0403173"} Jan 09 10:58:48 crc kubenswrapper[4727]: I0109 10:58:48.897152 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:48 crc kubenswrapper[4727]: I0109 10:58:48.918495 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4757d" podStartSLOduration=2.528554147 podStartE2EDuration="7.918451711s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 10:58:42.237560581 +0000 UTC m=+767.687465362" lastFinishedPulling="2026-01-09 10:58:47.627458145 +0000 UTC m=+773.077362926" observedRunningTime="2026-01-09 10:58:48.916765536 +0000 UTC m=+774.366670347" watchObservedRunningTime="2026-01-09 10:58:48.918451711 +0000 UTC m=+774.368356492" Jan 09 10:58:50 crc kubenswrapper[4727]: I0109 10:58:50.022910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" event={"ID":"9721a7da-2c8a-4a0d-ac56-8b4b11c028cd","Type":"ContainerStarted","Data":"9bcbfcbca46344634fa168dd8d5e003cfd46a50e16d3456b4087bd6626cb9232"} Jan 09 10:58:50 crc kubenswrapper[4727]: I0109 10:58:50.055682 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-6dwzn" podStartSLOduration=4.030069872 
podStartE2EDuration="9.055649529s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 10:58:43.770796919 +0000 UTC m=+769.220701700" lastFinishedPulling="2026-01-09 10:58:48.796376576 +0000 UTC m=+774.246281357" observedRunningTime="2026-01-09 10:58:50.038427863 +0000 UTC m=+775.488332654" watchObservedRunningTime="2026-01-09 10:58:50.055649529 +0000 UTC m=+775.505554310" Jan 09 10:58:51 crc kubenswrapper[4727]: I0109 10:58:51.032151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" event={"ID":"0683f840-0540-443e-8f9d-123b701acbd7","Type":"ContainerStarted","Data":"ccc9a181f2c0c897bc98ffd67ee62069007e4214f085cbf72f7f1d64cd7cfb01"} Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.225238 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4757d" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.269137 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-txtbd" podStartSLOduration=4.00799404 podStartE2EDuration="11.269115115s" podCreationTimestamp="2026-01-09 10:58:41 +0000 UTC" firstStartedPulling="2026-01-09 10:58:42.930751876 +0000 UTC m=+768.380656667" lastFinishedPulling="2026-01-09 10:58:50.191872961 +0000 UTC m=+775.641777742" observedRunningTime="2026-01-09 10:58:51.065139926 +0000 UTC m=+776.515044727" watchObservedRunningTime="2026-01-09 10:58:52.269115115 +0000 UTC m=+777.719019896" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.505204 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.505265 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:52 crc kubenswrapper[4727]: I0109 10:58:52.512088 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:53 crc kubenswrapper[4727]: I0109 10:58:53.048402 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64db668f99-2zfcx" Jan 09 10:58:53 crc kubenswrapper[4727]: I0109 10:58:53.112758 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:59:02 crc kubenswrapper[4727]: I0109 10:59:02.743019 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-5lc88" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.405466 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.406124 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.406172 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.406965 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 09 10:59:09 crc kubenswrapper[4727]: I0109 10:59:09.407021 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3" gracePeriod=600 Jan 09 10:59:10 crc kubenswrapper[4727]: I0109 10:59:10.163353 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3" exitCode=0 Jan 09 10:59:10 crc kubenswrapper[4727]: I0109 10:59:10.163433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3"} Jan 09 10:59:10 crc kubenswrapper[4727]: I0109 10:59:10.163876 4727 scope.go:117] "RemoveContainer" containerID="fb441083f3f5e8ca04b59b61becd3d603982c90624c220dc9b4e5ca242fd7a31" Jan 09 10:59:11 crc kubenswrapper[4727]: I0109 10:59:11.173492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639"} Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.718790 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4"] Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.720836 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.724064 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.736926 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4"] Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.835617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.835739 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.835801 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: 
I0109 10:59:17.937907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.937996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.938089 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.939428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.939475 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:17 crc kubenswrapper[4727]: I0109 10:59:17.962702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.061742 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.172960 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-pjc7c" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" containerID="cri-o://3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" gracePeriod=15 Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.654452 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4"] Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.685401 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pjc7c_bab7ad75-cb15-4910-a013-e9cafba90f73/console/0.log" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.685496 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778211 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778272 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.778500 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") pod \"bab7ad75-cb15-4910-a013-e9cafba90f73\" (UID: \"bab7ad75-cb15-4910-a013-e9cafba90f73\") " Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.779669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.779694 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config" (OuterVolumeSpecName: "console-config") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.779995 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.780023 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca" (OuterVolumeSpecName: "service-ca") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.785823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.785868 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r" (OuterVolumeSpecName: "kube-api-access-4gr6r") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "kube-api-access-4gr6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.787410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bab7ad75-cb15-4910-a013-e9cafba90f73" (UID: "bab7ad75-cb15-4910-a013-e9cafba90f73"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884363 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884407 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gr6r\" (UniqueName: \"kubernetes.io/projected/bab7ad75-cb15-4910-a013-e9cafba90f73-kube-api-access-4gr6r\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884422 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-service-ca\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884435 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-console-config\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884445 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884458 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab7ad75-cb15-4910-a013-e9cafba90f73-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:18 crc kubenswrapper[4727]: I0109 10:59:18.884468 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bab7ad75-cb15-4910-a013-e9cafba90f73-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:19 crc 
kubenswrapper[4727]: I0109 10:59:19.230795 4727 generic.go:334] "Generic (PLEG): container finished" podID="af495843-7098-4ea5-9898-8a19dd9a0197" containerID="e1d03b69e93c7555701bb7210d9ea40ac4a6412d17bbb511efe9fc4f2222a8c6" exitCode=0 Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.230915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"e1d03b69e93c7555701bb7210d9ea40ac4a6412d17bbb511efe9fc4f2222a8c6"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.231014 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerStarted","Data":"59750bc7e55638f0b31208b2c7caeea05113198df2bedbc3bfe81ca123c0fefd"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.232942 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pjc7c_bab7ad75-cb15-4910-a013-e9cafba90f73/console/0.log" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.232997 4727 generic.go:334] "Generic (PLEG): container finished" podID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" exitCode=2 Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerDied","Data":"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233050 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pjc7c" 
event={"ID":"bab7ad75-cb15-4910-a013-e9cafba90f73","Type":"ContainerDied","Data":"929125b8b64331d2d6d391ab423a97e682d7d12d88e3ecc772238a6afa971136"} Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233072 4727 scope.go:117] "RemoveContainer" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.233146 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pjc7c" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.269305 4727 scope.go:117] "RemoveContainer" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" Jan 09 10:59:19 crc kubenswrapper[4727]: E0109 10:59:19.270536 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87\": container with ID starting with 3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87 not found: ID does not exist" containerID="3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.270588 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87"} err="failed to get container status \"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87\": rpc error: code = NotFound desc = could not find container \"3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87\": container with ID starting with 3178d0a78ec0d7a697c1fb3d6641f96a02f6f9365f9f081fd3b1e0b74d5b6a87 not found: ID does not exist" Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.288964 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:59:19 crc kubenswrapper[4727]: I0109 10:59:19.295980 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-pjc7c"] Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.066399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:20 crc kubenswrapper[4727]: E0109 10:59:20.068169 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.068254 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.068422 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" containerName="console" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.069502 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.087110 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.101802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.101875 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " 
pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.102163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.203220 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.203283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.203339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.204180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " 
pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.204960 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.226153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"redhat-operators-4bqw8\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.401200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.721413 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:20 crc kubenswrapper[4727]: I0109 10:59:20.869837 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab7ad75-cb15-4910-a013-e9cafba90f73" path="/var/lib/kubelet/pods/bab7ad75-cb15-4910-a013-e9cafba90f73/volumes" Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.267534 4727 generic.go:334] "Generic (PLEG): container finished" podID="af495843-7098-4ea5-9898-8a19dd9a0197" containerID="068d57e544a7f765940c2e31941f158bd5738a97c4e9fe4480c33141ea5d005e" exitCode=0 Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.267618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" 
event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"068d57e544a7f765940c2e31941f158bd5738a97c4e9fe4480c33141ea5d005e"} Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.272401 4727 generic.go:334] "Generic (PLEG): container finished" podID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerID="c6a09b2f72e99eb084d1a66aebd5266476f26ab76440561b2f90c12cb0e7d8e3" exitCode=0 Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.273251 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"c6a09b2f72e99eb084d1a66aebd5266476f26ab76440561b2f90c12cb0e7d8e3"} Jan 09 10:59:21 crc kubenswrapper[4727]: I0109 10:59:21.273309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerStarted","Data":"a7cc6e803dcd8ccf5e35b8cab1998e2b0c7b415f7a7b149ea44d0f438b8a28f0"} Jan 09 10:59:22 crc kubenswrapper[4727]: I0109 10:59:22.285818 4727 generic.go:334] "Generic (PLEG): container finished" podID="af495843-7098-4ea5-9898-8a19dd9a0197" containerID="aacfd1be3752e14fef9f75b5b32ba897ad74216e0490cef7e76a1aeefd5da5cc" exitCode=0 Jan 09 10:59:22 crc kubenswrapper[4727]: I0109 10:59:22.285895 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"aacfd1be3752e14fef9f75b5b32ba897ad74216e0490cef7e76a1aeefd5da5cc"} Jan 09 10:59:22 crc kubenswrapper[4727]: I0109 10:59:22.290595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerStarted","Data":"3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb"} Jan 09 10:59:23 crc 
kubenswrapper[4727]: I0109 10:59:23.747172 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.897503 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") pod \"af495843-7098-4ea5-9898-8a19dd9a0197\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.897608 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") pod \"af495843-7098-4ea5-9898-8a19dd9a0197\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.897802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") pod \"af495843-7098-4ea5-9898-8a19dd9a0197\" (UID: \"af495843-7098-4ea5-9898-8a19dd9a0197\") " Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.899831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle" (OuterVolumeSpecName: "bundle") pod "af495843-7098-4ea5-9898-8a19dd9a0197" (UID: "af495843-7098-4ea5-9898-8a19dd9a0197"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.904728 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h" (OuterVolumeSpecName: "kube-api-access-nxx6h") pod "af495843-7098-4ea5-9898-8a19dd9a0197" (UID: "af495843-7098-4ea5-9898-8a19dd9a0197"). InnerVolumeSpecName "kube-api-access-nxx6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.913687 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util" (OuterVolumeSpecName: "util") pod "af495843-7098-4ea5-9898-8a19dd9a0197" (UID: "af495843-7098-4ea5-9898-8a19dd9a0197"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.999577 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.999616 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxx6h\" (UniqueName: \"kubernetes.io/projected/af495843-7098-4ea5-9898-8a19dd9a0197-kube-api-access-nxx6h\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:23 crc kubenswrapper[4727]: I0109 10:59:23.999630 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af495843-7098-4ea5-9898-8a19dd9a0197-util\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.309112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" 
event={"ID":"af495843-7098-4ea5-9898-8a19dd9a0197","Type":"ContainerDied","Data":"59750bc7e55638f0b31208b2c7caeea05113198df2bedbc3bfe81ca123c0fefd"} Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.309716 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59750bc7e55638f0b31208b2c7caeea05113198df2bedbc3bfe81ca123c0fefd" Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.309136 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4" Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.311911 4727 generic.go:334] "Generic (PLEG): container finished" podID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerID="3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb" exitCode=0 Jan 09 10:59:24 crc kubenswrapper[4727]: I0109 10:59:24.311978 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb"} Jan 09 10:59:25 crc kubenswrapper[4727]: I0109 10:59:25.322205 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerStarted","Data":"f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022"} Jan 09 10:59:25 crc kubenswrapper[4727]: I0109 10:59:25.344792 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bqw8" podStartSLOduration=1.671936512 podStartE2EDuration="5.344766974s" podCreationTimestamp="2026-01-09 10:59:20 +0000 UTC" firstStartedPulling="2026-01-09 10:59:21.276707343 +0000 UTC m=+806.726612124" lastFinishedPulling="2026-01-09 10:59:24.949537805 +0000 UTC m=+810.399442586" observedRunningTime="2026-01-09 
10:59:25.341290772 +0000 UTC m=+810.791195573" watchObservedRunningTime="2026-01-09 10:59:25.344766974 +0000 UTC m=+810.794671765" Jan 09 10:59:30 crc kubenswrapper[4727]: I0109 10:59:30.402369 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:30 crc kubenswrapper[4727]: I0109 10:59:30.402771 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:31 crc kubenswrapper[4727]: I0109 10:59:31.444185 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bqw8" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" probeResult="failure" output=< Jan 09 10:59:31 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 10:59:31 crc kubenswrapper[4727]: > Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.092810 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228"] Jan 09 10:59:34 crc kubenswrapper[4727]: E0109 10:59:34.093658 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="util" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093678 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="util" Jan 09 10:59:34 crc kubenswrapper[4727]: E0109 10:59:34.093705 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="pull" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093714 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="pull" Jan 09 10:59:34 crc kubenswrapper[4727]: E0109 10:59:34.093731 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="extract" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093739 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="extract" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.093877 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="af495843-7098-4ea5-9898-8a19dd9a0197" containerName="extract" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.094566 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.096544 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.097113 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.097431 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fdlt9" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.098963 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.102668 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.116694 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228"] Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.141636 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9bb\" (UniqueName: 
\"kubernetes.io/projected/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-kube-api-access-xd9bb\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.141811 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-webhook-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.142015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-apiservice-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.244226 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-apiservice-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.244319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9bb\" (UniqueName: \"kubernetes.io/projected/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-kube-api-access-xd9bb\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: 
\"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.244360 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-webhook-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.255271 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-webhook-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.259817 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-apiservice-cert\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.266916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9bb\" (UniqueName: \"kubernetes.io/projected/d7eb33c1-26fc-47be-8c5b-f235afa77ea8-kube-api-access-xd9bb\") pod \"metallb-operator-controller-manager-7fc8994bc9-qg228\" (UID: \"d7eb33c1-26fc-47be-8c5b-f235afa77ea8\") " pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.332580 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz"] Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.333391 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.336066 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.336200 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.337107 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ctwcm" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.349762 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz"] Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.350615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-webhook-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.350692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmgn\" (UniqueName: \"kubernetes.io/projected/d3f738e6-a0bc-42cd-b4d8-71940837e09f-kube-api-access-psmgn\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.350794 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-apiservice-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.414286 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.451886 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-webhook-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.452238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psmgn\" (UniqueName: \"kubernetes.io/projected/d3f738e6-a0bc-42cd-b4d8-71940837e09f-kube-api-access-psmgn\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.452298 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-apiservice-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.460816 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-apiservice-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.464440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3f738e6-a0bc-42cd-b4d8-71940837e09f-webhook-cert\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.477385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psmgn\" (UniqueName: \"kubernetes.io/projected/d3f738e6-a0bc-42cd-b4d8-71940837e09f-kube-api-access-psmgn\") pod \"metallb-operator-webhook-server-6c5db45976-lnrnz\" (UID: \"d3f738e6-a0bc-42cd-b4d8-71940837e09f\") " pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.648759 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:34 crc kubenswrapper[4727]: I0109 10:59:34.759112 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228"] Jan 09 10:59:35 crc kubenswrapper[4727]: I0109 10:59:35.385664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" event={"ID":"d7eb33c1-26fc-47be-8c5b-f235afa77ea8","Type":"ContainerStarted","Data":"7f48e767fabdfa06cafbe5d850a392bb64a1f11f8d46a5008f86e739de73024a"} Jan 09 10:59:35 crc kubenswrapper[4727]: I0109 10:59:35.489223 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz"] Jan 09 10:59:35 crc kubenswrapper[4727]: W0109 10:59:35.496639 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3f738e6_a0bc_42cd_b4d8_71940837e09f.slice/crio-6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4 WatchSource:0}: Error finding container 6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4: Status 404 returned error can't find the container with id 6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4 Jan 09 10:59:36 crc kubenswrapper[4727]: I0109 10:59:36.399056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" event={"ID":"d3f738e6-a0bc-42cd-b4d8-71940837e09f","Type":"ContainerStarted","Data":"6e5686c38d3e6fd8d976856b7f7f785ef705caf86cc28c0aab032519fe0c32f4"} Jan 09 10:59:40 crc kubenswrapper[4727]: I0109 10:59:40.502643 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:40 crc kubenswrapper[4727]: I0109 10:59:40.556277 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:40 crc kubenswrapper[4727]: I0109 10:59:40.734352 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:42 crc kubenswrapper[4727]: I0109 10:59:42.445035 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bqw8" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" containerID="cri-o://f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022" gracePeriod=2 Jan 09 10:59:43 crc kubenswrapper[4727]: I0109 10:59:43.454894 4727 generic.go:334] "Generic (PLEG): container finished" podID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerID="f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022" exitCode=0 Jan 09 10:59:43 crc kubenswrapper[4727]: I0109 10:59:43.454945 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022"} Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.744883 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.942759 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") pod \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.942938 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") pod \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.942983 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") pod \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\" (UID: \"42a2f991-4bd0-4eba-84c9-e5020d40afd0\") " Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.944111 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities" (OuterVolumeSpecName: "utilities") pod "42a2f991-4bd0-4eba-84c9-e5020d40afd0" (UID: "42a2f991-4bd0-4eba-84c9-e5020d40afd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:45 crc kubenswrapper[4727]: I0109 10:59:45.949332 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878" (OuterVolumeSpecName: "kube-api-access-z7878") pod "42a2f991-4bd0-4eba-84c9-e5020d40afd0" (UID: "42a2f991-4bd0-4eba-84c9-e5020d40afd0"). InnerVolumeSpecName "kube-api-access-z7878". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.045075 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7878\" (UniqueName: \"kubernetes.io/projected/42a2f991-4bd0-4eba-84c9-e5020d40afd0-kube-api-access-z7878\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.045579 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.058802 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42a2f991-4bd0-4eba-84c9-e5020d40afd0" (UID: "42a2f991-4bd0-4eba-84c9-e5020d40afd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.155646 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a2f991-4bd0-4eba-84c9-e5020d40afd0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.478847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bqw8" event={"ID":"42a2f991-4bd0-4eba-84c9-e5020d40afd0","Type":"ContainerDied","Data":"a7cc6e803dcd8ccf5e35b8cab1998e2b0c7b415f7a7b149ea44d0f438b8a28f0"} Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.478926 4727 scope.go:117] "RemoveContainer" containerID="f0aaa3544259963139705cc2f3728c9180c3257a5d515bb3594136f5ebbce022" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.479560 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bqw8" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.480974 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" event={"ID":"d3f738e6-a0bc-42cd-b4d8-71940837e09f","Type":"ContainerStarted","Data":"496ac14fde94ebfa73edd4f4f740ba85472ce45fa93725992cbad4c2b32d953c"} Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.481167 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.484308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" event={"ID":"d7eb33c1-26fc-47be-8c5b-f235afa77ea8","Type":"ContainerStarted","Data":"1d86bebf950d90185802e82ee5f4f149cdf9b7c09897138859c5e53f5330d4a8"} Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.484589 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.500814 4727 scope.go:117] "RemoveContainer" containerID="3fd62562fb69160399ec84d2f73f694c38d1052013ad3187b8476690505ebefb" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.509863 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" podStartSLOduration=2.647153178 podStartE2EDuration="12.509844305s" podCreationTimestamp="2026-01-09 10:59:34 +0000 UTC" firstStartedPulling="2026-01-09 10:59:35.500132894 +0000 UTC m=+820.950037675" lastFinishedPulling="2026-01-09 10:59:45.362824021 +0000 UTC m=+830.812728802" observedRunningTime="2026-01-09 10:59:46.508663123 +0000 UTC m=+831.958567914" watchObservedRunningTime="2026-01-09 10:59:46.509844305 +0000 UTC m=+831.959749086" Jan 
09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.520180 4727 scope.go:117] "RemoveContainer" containerID="c6a09b2f72e99eb084d1a66aebd5266476f26ab76440561b2f90c12cb0e7d8e3" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.551955 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" podStartSLOduration=2.031107409 podStartE2EDuration="12.55193217s" podCreationTimestamp="2026-01-09 10:59:34 +0000 UTC" firstStartedPulling="2026-01-09 10:59:34.815494088 +0000 UTC m=+820.265398869" lastFinishedPulling="2026-01-09 10:59:45.336318849 +0000 UTC m=+830.786223630" observedRunningTime="2026-01-09 10:59:46.549465414 +0000 UTC m=+831.999370195" watchObservedRunningTime="2026-01-09 10:59:46.55193217 +0000 UTC m=+832.001836951" Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.574351 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.577646 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bqw8"] Jan 09 10:59:46 crc kubenswrapper[4727]: I0109 10:59:46.869212 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" path="/var/lib/kubelet/pods/42a2f991-4bd0-4eba-84c9-e5020d40afd0/volumes" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.167014 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:00:00 crc kubenswrapper[4727]: E0109 11:00:00.168086 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168105 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" 
containerName="registry-server" Jan 09 11:00:00 crc kubenswrapper[4727]: E0109 11:00:00.168123 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-utilities" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168131 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-utilities" Jan 09 11:00:00 crc kubenswrapper[4727]: E0109 11:00:00.168153 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-content" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168160 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="extract-content" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168278 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a2f991-4bd0-4eba-84c9-e5020d40afd0" containerName="registry-server" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.168852 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.171618 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.172266 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.180544 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.272320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.272432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.272780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.374741 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.374822 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.374898 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.376116 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.384010 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.396262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"collect-profiles-29465940-546ww\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.535436 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:00 crc kubenswrapper[4727]: I0109 11:00:00.766028 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"] Jan 09 11:00:01 crc kubenswrapper[4727]: I0109 11:00:01.596379 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerID="b65ad815096d70648fb353956b9ad150a228f000450b80449e7948a4c212e007" exitCode=0 Jan 09 11:00:01 crc kubenswrapper[4727]: I0109 11:00:01.596433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" event={"ID":"f4efe522-b8d6-44a6-a75b-7cb19f528323","Type":"ContainerDied","Data":"b65ad815096d70648fb353956b9ad150a228f000450b80449e7948a4c212e007"} Jan 09 11:00:01 crc kubenswrapper[4727]: I0109 11:00:01.596802 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" 
event={"ID":"f4efe522-b8d6-44a6-a75b-7cb19f528323","Type":"ContainerStarted","Data":"5fe88049c8f2c821430aca9ca2c095bd9ba8fbc6dae83b7bcb00ee8b9437fa34"} Jan 09 11:00:02 crc kubenswrapper[4727]: I0109 11:00:02.834929 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.009962 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") pod \"f4efe522-b8d6-44a6-a75b-7cb19f528323\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.010191 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") pod \"f4efe522-b8d6-44a6-a75b-7cb19f528323\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.010225 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") pod \"f4efe522-b8d6-44a6-a75b-7cb19f528323\" (UID: \"f4efe522-b8d6-44a6-a75b-7cb19f528323\") " Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.011428 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4efe522-b8d6-44a6-a75b-7cb19f528323" (UID: "f4efe522-b8d6-44a6-a75b-7cb19f528323"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.017796 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4efe522-b8d6-44a6-a75b-7cb19f528323" (UID: "f4efe522-b8d6-44a6-a75b-7cb19f528323"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.018395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4" (OuterVolumeSpecName: "kube-api-access-lvdh4") pod "f4efe522-b8d6-44a6-a75b-7cb19f528323" (UID: "f4efe522-b8d6-44a6-a75b-7cb19f528323"). InnerVolumeSpecName "kube-api-access-lvdh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.112111 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4efe522-b8d6-44a6-a75b-7cb19f528323-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.112180 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4efe522-b8d6-44a6-a75b-7cb19f528323-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.112202 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvdh4\" (UniqueName: \"kubernetes.io/projected/f4efe522-b8d6-44a6-a75b-7cb19f528323-kube-api-access-lvdh4\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.614176 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.614092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww" event={"ID":"f4efe522-b8d6-44a6-a75b-7cb19f528323","Type":"ContainerDied","Data":"5fe88049c8f2c821430aca9ca2c095bd9ba8fbc6dae83b7bcb00ee8b9437fa34"} Jan 09 11:00:03 crc kubenswrapper[4727]: I0109 11:00:03.614356 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe88049c8f2c821430aca9ca2c095bd9ba8fbc6dae83b7bcb00ee8b9437fa34" Jan 09 11:00:04 crc kubenswrapper[4727]: I0109 11:00:04.654655 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c5db45976-lnrnz" Jan 09 11:00:24 crc kubenswrapper[4727]: I0109 11:00:24.420603 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7fc8994bc9-qg228" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.250925 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xvvzt"] Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.251327 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerName="collect-profiles" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.251351 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerName="collect-profiles" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.251482 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" containerName="collect-profiles" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.253710 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.256164 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.257429 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.258124 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lvktz" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.260025 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.262226 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.263539 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.277205 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-startup\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-reloader\") pod \"frr-k8s-xvvzt\" (UID: 
\"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281716 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkb7s\" (UniqueName: \"kubernetes.io/projected/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-kube-api-access-qkb7s\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-sockets\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.281939 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics-certs\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.282023 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics\") pod \"frr-k8s-xvvzt\" (UID: 
\"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.282058 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9mt\" (UniqueName: \"kubernetes.io/projected/e9d515de-9700-4c41-97f0-317214f0a7bb-kube-api-access-2j9mt\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.282090 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-conf\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.350829 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ls2r2"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.351737 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ls2r2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.354939 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.354948 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.355174 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gmz4p" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.356114 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.365671 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-ljds2"] Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.368073 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.370253 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkb7s\" (UniqueName: \"kubernetes.io/projected/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-kube-api-access-qkb7s\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-sockets\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383442 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb75e8-9dff-48d1-952b-a07637adfceb-metallb-excludel2\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics-certs\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383491 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb84t\" (UniqueName: \"kubernetes.io/projected/8ffb75e8-9dff-48d1-952b-a07637adfceb-kube-api-access-rb84t\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383562 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9mt\" (UniqueName: \"kubernetes.io/projected/e9d515de-9700-4c41-97f0-317214f0a7bb-kube-api-access-2j9mt\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383660 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-conf\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-startup\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-reloader\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-cert\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.383913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384019 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-sockets\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9b7z\" (UniqueName: \"kubernetes.io/projected/da86c323-c171-499f-8e25-74532f7c1fca-kube-api-access-n9b7z\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384148 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-conf\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384148 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.384259 4727 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.384316 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert podName:ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.88429719 +0000 UTC m=+871.334201971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert") pod "frr-k8s-webhook-server-7784b6fcf-6msbv" (UID: "ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee") : secret "frr-k8s-webhook-server-cert" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384316 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e9d515de-9700-4c41-97f0-317214f0a7bb-reloader\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.384908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e9d515de-9700-4c41-97f0-317214f0a7bb-frr-startup\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.390586 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-ljds2"]
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.407238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9d515de-9700-4c41-97f0-317214f0a7bb-metrics-certs\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.418406 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkb7s\" (UniqueName: \"kubernetes.io/projected/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-kube-api-access-qkb7s\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.445653 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9mt\" (UniqueName: \"kubernetes.io/projected/e9d515de-9700-4c41-97f0-317214f0a7bb-kube-api-access-2j9mt\") pod \"frr-k8s-xvvzt\" (UID: \"e9d515de-9700-4c41-97f0-317214f0a7bb\") " pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485301 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-cert\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485436 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9b7z\" (UniqueName: \"kubernetes.io/projected/da86c323-c171-499f-8e25-74532f7c1fca-kube-api-access-n9b7z\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb75e8-9dff-48d1-952b-a07637adfceb-metallb-excludel2\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb84t\" (UniqueName: \"kubernetes.io/projected/8ffb75e8-9dff-48d1-952b-a07637adfceb-kube-api-access-rb84t\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485569 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.485595 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485660 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist podName:8ffb75e8-9dff-48d1-952b-a07637adfceb nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.985635511 +0000 UTC m=+871.435540292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist") pod "speaker-ls2r2" (UID: "8ffb75e8-9dff-48d1-952b-a07637adfceb") : secret "metallb-memberlist" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485733 4727 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485732 4727 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485806 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs podName:8ffb75e8-9dff-48d1-952b-a07637adfceb nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.985782535 +0000 UTC m=+871.435687316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs") pod "speaker-ls2r2" (UID: "8ffb75e8-9dff-48d1-952b-a07637adfceb") : secret "speaker-certs-secret" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.485826 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs podName:da86c323-c171-499f-8e25-74532f7c1fca nodeName:}" failed. No retries permitted until 2026-01-09 11:00:25.985815976 +0000 UTC m=+871.435720757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs") pod "controller-5bddd4b946-ljds2" (UID: "da86c323-c171-499f-8e25-74532f7c1fca") : secret "controller-certs-secret" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.486449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8ffb75e8-9dff-48d1-952b-a07637adfceb-metallb-excludel2\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.489803 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.516237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-cert\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.520571 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9b7z\" (UniqueName: \"kubernetes.io/projected/da86c323-c171-499f-8e25-74532f7c1fca-kube-api-access-n9b7z\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.534360 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb84t\" (UniqueName: \"kubernetes.io/projected/8ffb75e8-9dff-48d1-952b-a07637adfceb-kube-api-access-rb84t\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.584650 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.802293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"efd4c49e809e6c226292509c4ab0ab548700c565e167915c92891158d9076137"}
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.891879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.898430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-6msbv\" (UID: \"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.997686 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.997840 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:25 crc kubenswrapper[4727]: I0109 11:00:25.997878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.998980 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 09 11:00:25 crc kubenswrapper[4727]: E0109 11:00:25.999088 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist podName:8ffb75e8-9dff-48d1-952b-a07637adfceb nodeName:}" failed. No retries permitted until 2026-01-09 11:00:26.999045889 +0000 UTC m=+872.448950680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist") pod "speaker-ls2r2" (UID: "8ffb75e8-9dff-48d1-952b-a07637adfceb") : secret "metallb-memberlist" not found
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.001647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/da86c323-c171-499f-8e25-74532f7c1fca-metrics-certs\") pod \"controller-5bddd4b946-ljds2\" (UID: \"da86c323-c171-499f-8e25-74532f7c1fca\") " pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.001858 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-metrics-certs\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.197320 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.287583 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.496002 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-ljds2"]
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.654590 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"]
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811371 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-ljds2" event={"ID":"da86c323-c171-499f-8e25-74532f7c1fca","Type":"ContainerStarted","Data":"67f5223dee7e4ce371ce2b4f2734ebcec5a20080c1da731465e5444d898377c7"}
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811426 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-ljds2" event={"ID":"da86c323-c171-499f-8e25-74532f7c1fca","Type":"ContainerStarted","Data":"bfd5b5a5bd0846086d6f863108057ac49770c8d9bd85e84acc04ffa17c7f9637"}
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-ljds2" event={"ID":"da86c323-c171-499f-8e25-74532f7c1fca","Type":"ContainerStarted","Data":"83766bc7e4255f41827f3779589be93806699bdb1962a65bab36119f0b5e8ec2"}
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.811571 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.817401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" event={"ID":"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee","Type":"ContainerStarted","Data":"8bd8725a9cb6287049f726a46d371b42b138b6fd40b2a1a5e4ced8cd2a7fa877"}
Jan 09 11:00:26 crc kubenswrapper[4727]: I0109 11:00:26.834737 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-ljds2" podStartSLOduration=1.834706213 podStartE2EDuration="1.834706213s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:00:26.828992379 +0000 UTC m=+872.278897170" watchObservedRunningTime="2026-01-09 11:00:26.834706213 +0000 UTC m=+872.284611034"
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.014694 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.023488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8ffb75e8-9dff-48d1-952b-a07637adfceb-memberlist\") pod \"speaker-ls2r2\" (UID: \"8ffb75e8-9dff-48d1-952b-a07637adfceb\") " pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.169890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:27 crc kubenswrapper[4727]: W0109 11:00:27.206962 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ffb75e8_9dff_48d1_952b_a07637adfceb.slice/crio-92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55 WatchSource:0}: Error finding container 92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55: Status 404 returned error can't find the container with id 92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834421 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ls2r2" event={"ID":"8ffb75e8-9dff-48d1-952b-a07637adfceb","Type":"ContainerStarted","Data":"a2184598e0014888261a5ae6fffb04fa04a45f0f9ff1fa8a2fef4373e7d5b9ad"}
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ls2r2" event={"ID":"8ffb75e8-9dff-48d1-952b-a07637adfceb","Type":"ContainerStarted","Data":"af295ea6cbca9efecfee2835c5c02cb6feabccb976cc7af0cf98636fa5f0298f"}
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ls2r2" event={"ID":"8ffb75e8-9dff-48d1-952b-a07637adfceb","Type":"ContainerStarted","Data":"92a5ca7de3e277e026106d69fc9cea7131c4020fa876f2382b0c47c35c212b55"}
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.834992 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:27 crc kubenswrapper[4727]: I0109 11:00:27.872266 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ls2r2" podStartSLOduration=2.8722486480000002 podStartE2EDuration="2.872248648s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:00:27.867459878 +0000 UTC m=+873.317364659" watchObservedRunningTime="2026-01-09 11:00:27.872248648 +0000 UTC m=+873.322153429"
Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.887030 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" event={"ID":"ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee","Type":"ContainerStarted","Data":"114603ab77cc97784a303c62f913ae683a3cdfd182ea0890b943084f6ebeec0a"}
Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.887723 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv"
Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.894218 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9d515de-9700-4c41-97f0-317214f0a7bb" containerID="336cfe712fca2d662d2ed36f9c76bb33ff8d3f0bddb65fcea4e9b55d1bea319b" exitCode=0
Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.895702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerDied","Data":"336cfe712fca2d662d2ed36f9c76bb33ff8d3f0bddb65fcea4e9b55d1bea319b"}
Jan 09 11:00:34 crc kubenswrapper[4727]: I0109 11:00:34.916311 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" podStartSLOduration=2.571172402 podStartE2EDuration="9.916285345s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="2026-01-09 11:00:26.658829982 +0000 UTC m=+872.108734763" lastFinishedPulling="2026-01-09 11:00:34.003942925 +0000 UTC m=+879.453847706" observedRunningTime="2026-01-09 11:00:34.916263855 +0000 UTC m=+880.366168656" watchObservedRunningTime="2026-01-09 11:00:34.916285345 +0000 UTC m=+880.366190126"
Jan 09 11:00:35 crc kubenswrapper[4727]: I0109 11:00:35.903076 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9d515de-9700-4c41-97f0-317214f0a7bb" containerID="7d31d2b0dcba99d43aeb055586eb04e04a9fe8526a5e31dd79fd2c79733bb673" exitCode=0
Jan 09 11:00:35 crc kubenswrapper[4727]: I0109 11:00:35.903205 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerDied","Data":"7d31d2b0dcba99d43aeb055586eb04e04a9fe8526a5e31dd79fd2c79733bb673"}
Jan 09 11:00:36 crc kubenswrapper[4727]: I0109 11:00:36.291608 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-ljds2"
Jan 09 11:00:36 crc kubenswrapper[4727]: I0109 11:00:36.911705 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9d515de-9700-4c41-97f0-317214f0a7bb" containerID="bdc6946f96e0e7dcf5dbb1e4a5c63b96a5a0e85b3e34acd19aef89c6aaf0797f" exitCode=0
Jan 09 11:00:36 crc kubenswrapper[4727]: I0109 11:00:36.911752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerDied","Data":"bdc6946f96e0e7dcf5dbb1e4a5c63b96a5a0e85b3e34acd19aef89c6aaf0797f"}
Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.173906 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ls2r2"
Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"e987960defdc214ea336ca385c3cdae993f275154fc21c88469e51440955ef47"}
Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932960 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"154298782c72eff5b329379a1df10c8f873e06e3df6e3cfd41c4e099ce2dcfaf"}
Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"21abf60c1c460debbfd2b2e62734c491f1703f84fc29fd284b01fc041c120fff"}
Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.932990 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"92927aefb40e1e389ba422fd9a2856038683fc577889db6b247a0cacda12dbd9"}
Jan 09 11:00:37 crc kubenswrapper[4727]: I0109 11:00:37.933002 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"31ed0c4e67371ff5f51eee6194b44961ad944c25322548eb4d4668475a3881e2"}
Jan 09 11:00:38 crc kubenswrapper[4727]: I0109 11:00:38.960378 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xvvzt" event={"ID":"e9d515de-9700-4c41-97f0-317214f0a7bb","Type":"ContainerStarted","Data":"8e02e46b8c7432be7961a22e210442f72753127391562bc40579f00699015f7b"}
Jan 09 11:00:38 crc kubenswrapper[4727]: I0109 11:00:38.961623 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.480569 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xvvzt" podStartSLOduration=7.267998937 podStartE2EDuration="15.480494748s" podCreationTimestamp="2026-01-09 11:00:25 +0000 UTC" firstStartedPulling="2026-01-09 11:00:25.775227466 +0000 UTC m=+871.225132247" lastFinishedPulling="2026-01-09 11:00:33.987723257 +0000 UTC m=+879.437628058" observedRunningTime="2026-01-09 11:00:38.989162762 +0000 UTC m=+884.439067553" watchObservedRunningTime="2026-01-09 11:00:40.480494748 +0000 UTC m=+885.930399589"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.488468 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"]
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.489577 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.492517 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.495929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7c9d7"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.496933 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.499786 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"]
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.536787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"openstack-operator-index-8pfvp\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " pod="openstack-operators/openstack-operator-index-8pfvp"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.585590 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.638483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"openstack-operator-index-8pfvp\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " pod="openstack-operators/openstack-operator-index-8pfvp"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.642383 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xvvzt"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.666033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"openstack-operator-index-8pfvp\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " pod="openstack-operators/openstack-operator-index-8pfvp"
Jan 09 11:00:40 crc kubenswrapper[4727]: I0109 11:00:40.855428 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp"
Jan 09 11:00:41 crc kubenswrapper[4727]: I0109 11:00:41.092492 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"]
Jan 09 11:00:41 crc kubenswrapper[4727]: I0109 11:00:41.982791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerStarted","Data":"feca25ae368a63dde2a5507266735a1dc7c994fd1e913e4707c207605e844510"}
Jan 09 11:00:43 crc kubenswrapper[4727]: I0109 11:00:43.850469 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"]
Jan 09 11:00:43 crc kubenswrapper[4727]: I0109 11:00:43.999799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerStarted","Data":"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc"}
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.027212 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8pfvp" podStartSLOduration=1.898077695 podStartE2EDuration="4.027185281s" podCreationTimestamp="2026-01-09 11:00:40 +0000 UTC" firstStartedPulling="2026-01-09 11:00:41.104132086 +0000 UTC m=+886.554036857" lastFinishedPulling="2026-01-09 11:00:43.233239662 +0000 UTC m=+888.683144443" observedRunningTime="2026-01-09 11:00:44.020374258 +0000 UTC m=+889.470279049" watchObservedRunningTime="2026-01-09 11:00:44.027185281 +0000 UTC m=+889.477090072"
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.458120 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cj5kr"]
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.460052 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cj5kr"
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.469320 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cj5kr"]
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.502257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229km\" (UniqueName: \"kubernetes.io/projected/26bfbd30-40a2-466a-862d-6cdf25911f85-kube-api-access-229km\") pod \"openstack-operator-index-cj5kr\" (UID: \"26bfbd30-40a2-466a-862d-6cdf25911f85\") " pod="openstack-operators/openstack-operator-index-cj5kr"
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.603614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229km\" (UniqueName: \"kubernetes.io/projected/26bfbd30-40a2-466a-862d-6cdf25911f85-kube-api-access-229km\") pod \"openstack-operator-index-cj5kr\" (UID: \"26bfbd30-40a2-466a-862d-6cdf25911f85\") " pod="openstack-operators/openstack-operator-index-cj5kr"
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.630938 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229km\" (UniqueName: \"kubernetes.io/projected/26bfbd30-40a2-466a-862d-6cdf25911f85-kube-api-access-229km\") pod \"openstack-operator-index-cj5kr\" (UID: \"26bfbd30-40a2-466a-862d-6cdf25911f85\") " pod="openstack-operators/openstack-operator-index-cj5kr"
Jan 09 11:00:44 crc kubenswrapper[4727]: I0109 11:00:44.824282 4727 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.009386 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8pfvp" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" containerID="cri-o://316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" gracePeriod=2 Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.096133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cj5kr"] Jan 09 11:00:45 crc kubenswrapper[4727]: W0109 11:00:45.109991 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26bfbd30_40a2_466a_862d_6cdf25911f85.slice/crio-c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860 WatchSource:0}: Error finding container c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860: Status 404 returned error can't find the container with id c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860 Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.331000 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.412591 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") pod \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\" (UID: \"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2\") " Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.419049 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx" (OuterVolumeSpecName: "kube-api-access-992xx") pod "6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" (UID: "6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2"). InnerVolumeSpecName "kube-api-access-992xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:00:45 crc kubenswrapper[4727]: I0109 11:00:45.514357 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-992xx\" (UniqueName: \"kubernetes.io/projected/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2-kube-api-access-992xx\") on node \"crc\" DevicePath \"\"" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.018568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cj5kr" event={"ID":"26bfbd30-40a2-466a-862d-6cdf25911f85","Type":"ContainerStarted","Data":"37dd237b8519cd9fa72e3ff3fe52e570212af7f2614e8cd49820095b682e3f8a"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.019001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cj5kr" event={"ID":"26bfbd30-40a2-466a-862d-6cdf25911f85","Type":"ContainerStarted","Data":"c7c32912e063b8ba8b26dcec7abc7feef736b593075108b87db88d9d0e9cf860"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.019968 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" exitCode=0 Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020003 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerDied","Data":"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8pfvp" event={"ID":"6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2","Type":"ContainerDied","Data":"feca25ae368a63dde2a5507266735a1dc7c994fd1e913e4707c207605e844510"} Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020037 4727 scope.go:117] "RemoveContainer" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.020138 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8pfvp" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.045667 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cj5kr" podStartSLOduration=1.987715713 podStartE2EDuration="2.045642594s" podCreationTimestamp="2026-01-09 11:00:44 +0000 UTC" firstStartedPulling="2026-01-09 11:00:45.114833947 +0000 UTC m=+890.564738728" lastFinishedPulling="2026-01-09 11:00:45.172760828 +0000 UTC m=+890.622665609" observedRunningTime="2026-01-09 11:00:46.040435324 +0000 UTC m=+891.490340135" watchObservedRunningTime="2026-01-09 11:00:46.045642594 +0000 UTC m=+891.495547415" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.052254 4727 scope.go:117] "RemoveContainer" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" Jan 09 11:00:46 crc kubenswrapper[4727]: E0109 11:00:46.053206 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc\": container with ID starting with 316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc not found: ID does not exist" containerID="316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.053266 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc"} err="failed to get container status \"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc\": rpc error: code = NotFound desc = could not find container \"316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc\": container with ID starting with 316a4d727f5d020ccd4f6c101e6edd17394ec365c4527ae4ecd7e65db40665cc not found: ID does not exist" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 
11:00:46.077820 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.083383 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8pfvp"] Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.205502 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-6msbv" Jan 09 11:00:46 crc kubenswrapper[4727]: I0109 11:00:46.912334 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" path="/var/lib/kubelet/pods/6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2/volumes" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.669813 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:00:51 crc kubenswrapper[4727]: E0109 11:00:51.671314 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.671434 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.671794 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdaabd5-8751-4a4b-aa40-0e1daac5c1b2" containerName="registry-server" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.673765 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.687162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.711276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.711599 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.711735 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813123 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813716 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.813804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:51 crc kubenswrapper[4727]: I0109 11:00:51.847564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"redhat-marketplace-gfmgm\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") " pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:52 crc kubenswrapper[4727]: I0109 11:00:52.003097 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm" Jan 09 11:00:52 crc kubenswrapper[4727]: I0109 11:00:52.483974 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"] Jan 09 11:00:52 crc kubenswrapper[4727]: W0109 11:00:52.493750 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8603c28d_35ff_45d9_a606_1ebc68271a2c.slice/crio-7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b WatchSource:0}: Error finding container 7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b: Status 404 returned error can't find the container with id 7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b Jan 09 11:00:53 crc kubenswrapper[4727]: I0109 11:00:53.077786 4727 generic.go:334] "Generic (PLEG): container finished" podID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerID="504edafdc06587cedd1404b889f20e4ee1038b1c7d57904249507ee37b13d657" exitCode=0 Jan 09 11:00:53 crc kubenswrapper[4727]: I0109 11:00:53.078111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"504edafdc06587cedd1404b889f20e4ee1038b1c7d57904249507ee37b13d657"} Jan 09 11:00:53 crc kubenswrapper[4727]: I0109 11:00:53.078133 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerStarted","Data":"7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b"} Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.090719 4727 generic.go:334] "Generic (PLEG): container finished" podID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerID="ff22aa3eacf371747748cec36311e35c2c5ebb77ee5b07f9cd43b5e1f320411e" exitCode=0 Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 
11:00:54.090810 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"ff22aa3eacf371747748cec36311e35c2c5ebb77ee5b07f9cd43b5e1f320411e"} Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.824804 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.825310 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:54 crc kubenswrapper[4727]: I0109 11:00:54.875014 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cj5kr" Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.100182 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerStarted","Data":"f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007"} Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.123449 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gfmgm" podStartSLOduration=2.471561655 podStartE2EDuration="4.123420818s" podCreationTimestamp="2026-01-09 11:00:51 +0000 UTC" firstStartedPulling="2026-01-09 11:00:53.080616358 +0000 UTC m=+898.530521139" lastFinishedPulling="2026-01-09 11:00:54.732475521 +0000 UTC m=+900.182380302" observedRunningTime="2026-01-09 11:00:55.121173957 +0000 UTC m=+900.571078748" watchObservedRunningTime="2026-01-09 11:00:55.123420818 +0000 UTC m=+900.573325599" Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.133292 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cj5kr" 
Jan 09 11:00:55 crc kubenswrapper[4727]: I0109 11:00:55.587783 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xvvzt" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.088814 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"] Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.090869 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.093443 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6c6hf" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.099670 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"] Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.197121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.197193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.197318 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.298978 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299286 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299801 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.299929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.324719 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.417613 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:00:57 crc kubenswrapper[4727]: I0109 11:00:57.700361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"] Jan 09 11:00:58 crc kubenswrapper[4727]: I0109 11:00:58.124607 4727 generic.go:334] "Generic (PLEG): container finished" podID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerID="6fa068476cee2339b8cf13515c203667c382610a59e33b81d3a4d3d3a0a10e1d" exitCode=0 Jan 09 11:00:58 crc kubenswrapper[4727]: I0109 11:00:58.124654 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"6fa068476cee2339b8cf13515c203667c382610a59e33b81d3a4d3d3a0a10e1d"} Jan 09 11:00:58 crc kubenswrapper[4727]: I0109 11:00:58.124682 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerStarted","Data":"65c603da891a75683a11d72a5c18f4a1e62955299536678ba847e5ae68334ccc"} Jan 09 11:00:59 crc kubenswrapper[4727]: I0109 11:00:59.136394 4727 generic.go:334] "Generic (PLEG): container finished" podID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerID="2e5abe507d1bbb2278e130fe556066b3ca098731f09074008cfa2cb6203d1837" exitCode=0 Jan 09 11:00:59 crc kubenswrapper[4727]: I0109 11:00:59.136473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"2e5abe507d1bbb2278e130fe556066b3ca098731f09074008cfa2cb6203d1837"} Jan 09 11:01:00 crc kubenswrapper[4727]: I0109 11:01:00.150101 4727 generic.go:334] 
"Generic (PLEG): container finished" podID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerID="fc411ebf90e64855159414d6ca76004d86cccdb5e3fc47200985a19b18737320" exitCode=0 Jan 09 11:01:00 crc kubenswrapper[4727]: I0109 11:01:00.150161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"fc411ebf90e64855159414d6ca76004d86cccdb5e3fc47200985a19b18737320"} Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.449077 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.562378 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") pod \"7624e855-2440-4a5a-8905-5e4e7c76a36c\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.562482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") pod \"7624e855-2440-4a5a-8905-5e4e7c76a36c\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.562654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") pod \"7624e855-2440-4a5a-8905-5e4e7c76a36c\" (UID: \"7624e855-2440-4a5a-8905-5e4e7c76a36c\") " Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.563476 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle" (OuterVolumeSpecName: "bundle") pod "7624e855-2440-4a5a-8905-5e4e7c76a36c" (UID: "7624e855-2440-4a5a-8905-5e4e7c76a36c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.573094 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2" (OuterVolumeSpecName: "kube-api-access-p62x2") pod "7624e855-2440-4a5a-8905-5e4e7c76a36c" (UID: "7624e855-2440-4a5a-8905-5e4e7c76a36c"). InnerVolumeSpecName "kube-api-access-p62x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.585348 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util" (OuterVolumeSpecName: "util") pod "7624e855-2440-4a5a-8905-5e4e7c76a36c" (UID: "7624e855-2440-4a5a-8905-5e4e7c76a36c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.665171 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.665361 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p62x2\" (UniqueName: \"kubernetes.io/projected/7624e855-2440-4a5a-8905-5e4e7c76a36c-kube-api-access-p62x2\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:01 crc kubenswrapper[4727]: I0109 11:01:01.665374 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7624e855-2440-4a5a-8905-5e4e7c76a36c-util\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.004747 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gfmgm"
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.005333 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gfmgm"
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.071243 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gfmgm"
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.166414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm" event={"ID":"7624e855-2440-4a5a-8905-5e4e7c76a36c","Type":"ContainerDied","Data":"65c603da891a75683a11d72a5c18f4a1e62955299536678ba847e5ae68334ccc"}
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.166465 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c603da891a75683a11d72a5c18f4a1e62955299536678ba847e5ae68334ccc"
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.166841 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm"
Jan 09 11:01:02 crc kubenswrapper[4727]: I0109 11:01:02.209021 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gfmgm"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108519 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"]
Jan 09 11:01:04 crc kubenswrapper[4727]: E0109 11:01:04.108781 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="extract"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108793 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="extract"
Jan 09 11:01:04 crc kubenswrapper[4727]: E0109 11:01:04.108808 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="pull"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108813 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="pull"
Jan 09 11:01:04 crc kubenswrapper[4727]: E0109 11:01:04.108828 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="util"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108834 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="util"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.108950 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7624e855-2440-4a5a-8905-5e4e7c76a36c" containerName="extract"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.109362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.112798 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-8bvpn"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.135964 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"]
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.311138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582qb\" (UniqueName: \"kubernetes.io/projected/f749f148-ae4b-475b-90d9-1028d134d57c-kube-api-access-582qb\") pod \"openstack-operator-controller-operator-75c59d454f-d829c\" (UID: \"f749f148-ae4b-475b-90d9-1028d134d57c\") " pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.412664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582qb\" (UniqueName: \"kubernetes.io/projected/f749f148-ae4b-475b-90d9-1028d134d57c-kube-api-access-582qb\") pod \"openstack-operator-controller-operator-75c59d454f-d829c\" (UID: \"f749f148-ae4b-475b-90d9-1028d134d57c\") " pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.433335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582qb\" (UniqueName: \"kubernetes.io/projected/f749f148-ae4b-475b-90d9-1028d134d57c-kube-api-access-582qb\") pod \"openstack-operator-controller-operator-75c59d454f-d829c\" (UID: \"f749f148-ae4b-475b-90d9-1028d134d57c\") " pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.448935 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"]
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.449755 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gfmgm" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server" containerID="cri-o://f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007" gracePeriod=2
Jan 09 11:01:04 crc kubenswrapper[4727]: I0109 11:01:04.724633 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.186363 4727 generic.go:334] "Generic (PLEG): container finished" podID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerID="f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007" exitCode=0
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.186466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007"}
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.223432 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"]
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.334801 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm"
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.528014 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") pod \"8603c28d-35ff-45d9-a606-1ebc68271a2c\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") "
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.528531 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") pod \"8603c28d-35ff-45d9-a606-1ebc68271a2c\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") "
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.528719 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") pod \"8603c28d-35ff-45d9-a606-1ebc68271a2c\" (UID: \"8603c28d-35ff-45d9-a606-1ebc68271a2c\") "
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.529340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities" (OuterVolumeSpecName: "utilities") pod "8603c28d-35ff-45d9-a606-1ebc68271a2c" (UID: "8603c28d-35ff-45d9-a606-1ebc68271a2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.530713 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.537440 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj" (OuterVolumeSpecName: "kube-api-access-nngwj") pod "8603c28d-35ff-45d9-a606-1ebc68271a2c" (UID: "8603c28d-35ff-45d9-a606-1ebc68271a2c"). InnerVolumeSpecName "kube-api-access-nngwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.571845 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8603c28d-35ff-45d9-a606-1ebc68271a2c" (UID: "8603c28d-35ff-45d9-a606-1ebc68271a2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.632410 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8603c28d-35ff-45d9-a606-1ebc68271a2c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:05 crc kubenswrapper[4727]: I0109 11:01:05.632793 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngwj\" (UniqueName: \"kubernetes.io/projected/8603c28d-35ff-45d9-a606-1ebc68271a2c-kube-api-access-nngwj\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.204811 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" event={"ID":"f749f148-ae4b-475b-90d9-1028d134d57c","Type":"ContainerStarted","Data":"56c1335067c352d5069c2953bf0d4764bec967227c1782558152733b21e0e6f8"}
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.208753 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfmgm" event={"ID":"8603c28d-35ff-45d9-a606-1ebc68271a2c","Type":"ContainerDied","Data":"7c9e196247751a19487e29d29ee8b3ce5e616d5d41e3a07b02bd1a1c6242552b"}
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.208801 4727 scope.go:117] "RemoveContainer" containerID="f58aeac270ff45596ca9606a0784c1acd6e30f9fd5fa7618ad4c56c8f39b1007"
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.208840 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfmgm"
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.239425 4727 scope.go:117] "RemoveContainer" containerID="ff22aa3eacf371747748cec36311e35c2c5ebb77ee5b07f9cd43b5e1f320411e"
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.255712 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"]
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.267611 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfmgm"]
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.295126 4727 scope.go:117] "RemoveContainer" containerID="504edafdc06587cedd1404b889f20e4ee1038b1c7d57904249507ee37b13d657"
Jan 09 11:01:06 crc kubenswrapper[4727]: E0109 11:01:06.361430 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8603c28d_35ff_45d9_a606_1ebc68271a2c.slice\": RecentStats: unable to find data in memory cache]"
Jan 09 11:01:06 crc kubenswrapper[4727]: I0109 11:01:06.867876 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" path="/var/lib/kubelet/pods/8603c28d-35ff-45d9-a606-1ebc68271a2c/volumes"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.456223 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"]
Jan 09 11:01:10 crc kubenswrapper[4727]: E0109 11:01:10.457301 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-content"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457388 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-content"
Jan 09 11:01:10 crc kubenswrapper[4727]: E0109 11:01:10.457492 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457542 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server"
Jan 09 11:01:10 crc kubenswrapper[4727]: E0109 11:01:10.457565 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-utilities"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457576 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="extract-utilities"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.457770 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8603c28d-35ff-45d9-a606-1ebc68271a2c" containerName="registry-server"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.459080 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.467859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"]
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.507347 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.507749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.507799 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.608957 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.609046 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.609076 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.609936 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.610151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.649702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"certified-operators-tjjfx\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") " pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:10 crc kubenswrapper[4727]: I0109 11:01:10.794026 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:11 crc kubenswrapper[4727]: I0109 11:01:11.757767 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"]
Jan 09 11:01:11 crc kubenswrapper[4727]: W0109 11:01:11.776075 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded95bba5_81db_4552_bb87_8197e56d1164.slice/crio-79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839 WatchSource:0}: Error finding container 79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839: Status 404 returned error can't find the container with id 79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839
Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.260309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" event={"ID":"f749f148-ae4b-475b-90d9-1028d134d57c","Type":"ContainerStarted","Data":"326423a7fa179eda3d4fe6c5fc6ed654a41b92c845e7d9d963d6226d2f0d20a7"}
Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.261598 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.264193 4727 generic.go:334] "Generic (PLEG): container finished" podID="ed95bba5-81db-4552-bb87-8197e56d1164" containerID="279293e70d6049a7de0d4b5a0a88deb976d49f0ec5630012b133934b3b97e2ff" exitCode=0
Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.264232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"279293e70d6049a7de0d4b5a0a88deb976d49f0ec5630012b133934b3b97e2ff"}
Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.264254 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerStarted","Data":"79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839"}
Jan 09 11:01:12 crc kubenswrapper[4727]: I0109 11:01:12.314395 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c" podStartSLOduration=1.9694940920000001 podStartE2EDuration="8.314360381s" podCreationTimestamp="2026-01-09 11:01:04 +0000 UTC" firstStartedPulling="2026-01-09 11:01:05.240623731 +0000 UTC m=+910.690528512" lastFinishedPulling="2026-01-09 11:01:11.58549001 +0000 UTC m=+917.035394801" observedRunningTime="2026-01-09 11:01:12.308418789 +0000 UTC m=+917.758323640" watchObservedRunningTime="2026-01-09 11:01:12.314360381 +0000 UTC m=+917.764265242"
Jan 09 11:01:13 crc kubenswrapper[4727]: I0109 11:01:13.272233 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerStarted","Data":"63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c"}
Jan 09 11:01:14 crc kubenswrapper[4727]: I0109 11:01:14.307135 4727 generic.go:334] "Generic (PLEG): container finished" podID="ed95bba5-81db-4552-bb87-8197e56d1164" containerID="63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c" exitCode=0
Jan 09 11:01:14 crc kubenswrapper[4727]: I0109 11:01:14.307460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c"}
Jan 09 11:01:16 crc kubenswrapper[4727]: I0109 11:01:16.323757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerStarted","Data":"27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28"}
Jan 09 11:01:16 crc kubenswrapper[4727]: I0109 11:01:16.353563 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjjfx" podStartSLOduration=3.386835686 podStartE2EDuration="6.353535873s" podCreationTimestamp="2026-01-09 11:01:10 +0000 UTC" firstStartedPulling="2026-01-09 11:01:12.267409477 +0000 UTC m=+917.717314298" lastFinishedPulling="2026-01-09 11:01:15.234109704 +0000 UTC m=+920.684014485" observedRunningTime="2026-01-09 11:01:16.347060315 +0000 UTC m=+921.796965096" watchObservedRunningTime="2026-01-09 11:01:16.353535873 +0000 UTC m=+921.803440694"
Jan 09 11:01:20 crc kubenswrapper[4727]: I0109 11:01:20.795249 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:20 crc kubenswrapper[4727]: I0109 11:01:20.796087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:20 crc kubenswrapper[4727]: I0109 11:01:20.844436 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:21 crc kubenswrapper[4727]: I0109 11:01:21.414085 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:21 crc kubenswrapper[4727]: I0109 11:01:21.463269 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"]
Jan 09 11:01:23 crc kubenswrapper[4727]: I0109 11:01:23.379300 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tjjfx" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" containerID="cri-o://27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28" gracePeriod=2
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.390992 4727 generic.go:334] "Generic (PLEG): container finished" podID="ed95bba5-81db-4552-bb87-8197e56d1164" containerID="27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28" exitCode=0
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.391100 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28"}
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.525773 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6xmrq"]
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.533315 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.548429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"]
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.735159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.735537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.735589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.737194 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-75c59d454f-d829c"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.841970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.842794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.843052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.844556 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.847017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.866034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"community-operators-6xmrq\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.869886 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:24 crc kubenswrapper[4727]: I0109 11:01:24.940265 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.049996 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") pod \"ed95bba5-81db-4552-bb87-8197e56d1164\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") "
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.050397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") pod \"ed95bba5-81db-4552-bb87-8197e56d1164\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") "
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.050457 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") pod \"ed95bba5-81db-4552-bb87-8197e56d1164\" (UID: \"ed95bba5-81db-4552-bb87-8197e56d1164\") "
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.051436 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities" (OuterVolumeSpecName: "utilities") pod "ed95bba5-81db-4552-bb87-8197e56d1164" (UID: "ed95bba5-81db-4552-bb87-8197e56d1164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.066823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh" (OuterVolumeSpecName: "kube-api-access-kxmwh") pod "ed95bba5-81db-4552-bb87-8197e56d1164" (UID: "ed95bba5-81db-4552-bb87-8197e56d1164"). InnerVolumeSpecName "kube-api-access-kxmwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.132463 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed95bba5-81db-4552-bb87-8197e56d1164" (UID: "ed95bba5-81db-4552-bb87-8197e56d1164"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.152435 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.152483 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed95bba5-81db-4552-bb87-8197e56d1164-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.152498 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxmwh\" (UniqueName: \"kubernetes.io/projected/ed95bba5-81db-4552-bb87-8197e56d1164-kube-api-access-kxmwh\") on node \"crc\" DevicePath \"\""
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.400760 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjjfx" event={"ID":"ed95bba5-81db-4552-bb87-8197e56d1164","Type":"ContainerDied","Data":"79c902763eaf5a734de1eb147cda0895915230269733c584118180f897d96839"}
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.400857 4727 scope.go:117] "RemoveContainer" containerID="27bdfd5322a0f9a1b43421f86ee74595b0fec8d5fcc46f668167d623e45ced28"
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.400861 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjjfx"
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.428954 4727 scope.go:117] "RemoveContainer" containerID="63f44a7691bb4bacec57fb3d74922b1aa737cc8d5fcd0300fcdb093d30340e7c"
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.449669 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"]
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.460584 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tjjfx"]
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.479657 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"]
Jan 09 11:01:25 crc kubenswrapper[4727]: I0109 11:01:25.484793 4727 scope.go:117] "RemoveContainer" containerID="279293e70d6049a7de0d4b5a0a88deb976d49f0ec5630012b133934b3b97e2ff"
Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.411367 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9820648-5736-44d6-a4de-d859613ca72a" containerID="38b4bd7ad7d6efe596ba4944480bad75ce4aef56c76d9b2f4b7d7952a14c730e" exitCode=0
Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.411402 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"38b4bd7ad7d6efe596ba4944480bad75ce4aef56c76d9b2f4b7d7952a14c730e"}
Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.411714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerStarted","Data":"6375e166651f8e6dda4be61b8f9e768148ab8e9d2f7cb5925876ed999a6c55dd"}
Jan 09 11:01:26 crc kubenswrapper[4727]: I0109 11:01:26.869346 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" path="/var/lib/kubelet/pods/ed95bba5-81db-4552-bb87-8197e56d1164/volumes"
Jan 09 11:01:28 crc kubenswrapper[4727]: I0109 11:01:28.428178 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerStarted","Data":"33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9"}
Jan 09 11:01:29 crc kubenswrapper[4727]: I0109 11:01:29.441229 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9820648-5736-44d6-a4de-d859613ca72a" containerID="33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9" exitCode=0
Jan 09 11:01:29 crc kubenswrapper[4727]: I0109 11:01:29.441303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9"}
Jan 09 11:01:30 crc kubenswrapper[4727]: I0109 11:01:30.451860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerStarted","Data":"68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f"}
Jan 09 11:01:30 crc kubenswrapper[4727]: I0109 11:01:30.476693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6xmrq" podStartSLOduration=2.715979577 podStartE2EDuration="6.476667185s" podCreationTimestamp="2026-01-09 11:01:24 +0000 UTC" firstStartedPulling="2026-01-09 11:01:26.413677551 +0000 UTC m=+931.863582332" lastFinishedPulling="2026-01-09 11:01:30.174365159 +0000 UTC m=+935.624269940" observedRunningTime="2026-01-09 11:01:30.47305168 +0000 UTC m=+935.922956491" watchObservedRunningTime="2026-01-09 11:01:30.476667185 +0000 UTC m=+935.926571976"
Jan 09 11:01:34 crc kubenswrapper[4727]: I0109 11:01:34.870502 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:34 crc kubenswrapper[4727]: I0109 11:01:34.870858 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:34 crc kubenswrapper[4727]: I0109 11:01:34.911686 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:35 crc kubenswrapper[4727]: I0109 11:01:35.527232 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6xmrq"
Jan 09 11:01:35 crc kubenswrapper[4727]: I0109 11:01:35.568523 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"]
Jan 09 11:01:37 crc kubenswrapper[4727]: I0109 11:01:37.494334 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6xmrq" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" containerID="cri-o://68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f" gracePeriod=2
Jan 09 11:01:38 crc kubenswrapper[4727]: I0109 11:01:38.508769 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9820648-5736-44d6-a4de-d859613ca72a" containerID="68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f" exitCode=0
Jan 09 11:01:38 crc kubenswrapper[4727]: I0109 11:01:38.508857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f"}
Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.035892 4727 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.200269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") pod \"d9820648-5736-44d6-a4de-d859613ca72a\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.200364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") pod \"d9820648-5736-44d6-a4de-d859613ca72a\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.200421 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") pod \"d9820648-5736-44d6-a4de-d859613ca72a\" (UID: \"d9820648-5736-44d6-a4de-d859613ca72a\") " Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.201600 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities" (OuterVolumeSpecName: "utilities") pod "d9820648-5736-44d6-a4de-d859613ca72a" (UID: "d9820648-5736-44d6-a4de-d859613ca72a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.215417 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf" (OuterVolumeSpecName: "kube-api-access-jgmdf") pod "d9820648-5736-44d6-a4de-d859613ca72a" (UID: "d9820648-5736-44d6-a4de-d859613ca72a"). InnerVolumeSpecName "kube-api-access-jgmdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.250712 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9820648-5736-44d6-a4de-d859613ca72a" (UID: "d9820648-5736-44d6-a4de-d859613ca72a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.302603 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.302702 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9820648-5736-44d6-a4de-d859613ca72a-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.302722 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgmdf\" (UniqueName: \"kubernetes.io/projected/d9820648-5736-44d6-a4de-d859613ca72a-kube-api-access-jgmdf\") on node \"crc\" DevicePath \"\"" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.405006 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.405087 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.520422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xmrq" event={"ID":"d9820648-5736-44d6-a4de-d859613ca72a","Type":"ContainerDied","Data":"6375e166651f8e6dda4be61b8f9e768148ab8e9d2f7cb5925876ed999a6c55dd"} Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.520533 4727 scope.go:117] "RemoveContainer" containerID="68298f54004e1f12b2ec689e58c44d0c080799c3610ec1af3f56996bec938e1f" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.520543 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xmrq" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.539475 4727 scope.go:117] "RemoveContainer" containerID="33e4a6186aa58a3098ef71e752b399ab163d708895e4cb715021d1108a6b6db9" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.558568 4727 scope.go:117] "RemoveContainer" containerID="38b4bd7ad7d6efe596ba4944480bad75ce4aef56c76d9b2f4b7d7952a14c730e" Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.570654 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:39 crc kubenswrapper[4727]: I0109 11:01:39.578291 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6xmrq"] Jan 09 11:01:40 crc kubenswrapper[4727]: I0109 11:01:40.871312 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9820648-5736-44d6-a4de-d859613ca72a" path="/var/lib/kubelet/pods/d9820648-5736-44d6-a4de-d859613ca72a/volumes" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.762854 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx"] Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763785 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763810 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763858 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763870 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763887 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763902 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-content" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763920 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763931 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763947 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763958 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: E0109 11:01:45.763978 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.763989 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="extract-utilities" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.764167 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed95bba5-81db-4552-bb87-8197e56d1164" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.764197 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9820648-5736-44d6-a4de-d859613ca72a" containerName="registry-server" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.764973 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.767240 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pht98" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.771238 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.772453 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.775742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xjx8w" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.779688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.792803 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.793695 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.797440 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kr5dj" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.818305 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.821405 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.830844 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.834883 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.840938 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fpg6q" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.851298 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.852167 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.859950 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2szkv" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.860344 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.865561 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.892928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njtg\" (UniqueName: \"kubernetes.io/projected/f57a8b19-1f94-4cc4-af28-f7c506f93de5-kube-api-access-2njtg\") pod \"barbican-operator-controller-manager-f6f74d6db-nd7lx\" (UID: \"f57a8b19-1f94-4cc4-af28-f7c506f93de5\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.893358 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxflt\" (UniqueName: 
\"kubernetes.io/projected/63639485-2ddb-4983-921a-9de5dda98f0f-kube-api-access-gxflt\") pod \"cinder-operator-controller-manager-78979fc445-l25ck\" (UID: \"63639485-2ddb-4983-921a-9de5dda98f0f\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.895596 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.896767 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.899075 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mv86g" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.914616 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.915682 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.924848 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.925468 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t6tcr" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.925748 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.925844 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.928422 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-r9zld" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.948055 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.962592 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.967078 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.978578 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.979639 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.985028 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5"] Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.985210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sftbs" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996260 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqm82\" (UniqueName: \"kubernetes.io/projected/9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6-kube-api-access-zqm82\") pod \"glance-operator-controller-manager-7b549fc966-w5c7d\" (UID: \"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvgt\" (UniqueName: \"kubernetes.io/projected/e8c91cda-4264-401f-83de-20ddcf5f0d4d-kube-api-access-9fvgt\") pod \"designate-operator-controller-manager-66f8b87655-l4fld\" (UID: \"e8c91cda-4264-401f-83de-20ddcf5f0d4d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996349 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc688\" (UniqueName: \"kubernetes.io/projected/24886819-7c1f-4b1f-880e-4b2102e302c1-kube-api-access-kc688\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 
11:01:45.996393 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njtg\" (UniqueName: \"kubernetes.io/projected/f57a8b19-1f94-4cc4-af28-f7c506f93de5-kube-api-access-2njtg\") pod \"barbican-operator-controller-manager-f6f74d6db-nd7lx\" (UID: \"f57a8b19-1f94-4cc4-af28-f7c506f93de5\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996436 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdq6\" (UniqueName: \"kubernetes.io/projected/9891b17e-81f9-4999-b489-db3e162c2a54-kube-api-access-zzdq6\") pod \"heat-operator-controller-manager-658dd65b86-s49vr\" (UID: \"9891b17e-81f9-4999-b489-db3e162c2a54\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxflt\" (UniqueName: \"kubernetes.io/projected/63639485-2ddb-4983-921a-9de5dda98f0f-kube-api-access-gxflt\") pod \"cinder-operator-controller-manager-78979fc445-l25ck\" (UID: \"63639485-2ddb-4983-921a-9de5dda98f0f\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:45 crc kubenswrapper[4727]: I0109 11:01:45.996551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8nq\" (UniqueName: 
\"kubernetes.io/projected/51db22df-3d25-4c12-b104-eb3848940958-kube-api-access-sd8nq\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nxc7n\" (UID: \"51db22df-3d25-4c12-b104-eb3848940958\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.005901 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.006806 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.007856 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.009731 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bkcvd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.019189 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.023058 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9k9bz" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.055641 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njtg\" (UniqueName: \"kubernetes.io/projected/f57a8b19-1f94-4cc4-af28-f7c506f93de5-kube-api-access-2njtg\") pod \"barbican-operator-controller-manager-f6f74d6db-nd7lx\" (UID: \"f57a8b19-1f94-4cc4-af28-f7c506f93de5\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:46 crc 
kubenswrapper[4727]: I0109 11:01:46.055643 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxflt\" (UniqueName: \"kubernetes.io/projected/63639485-2ddb-4983-921a-9de5dda98f0f-kube-api-access-gxflt\") pod \"cinder-operator-controller-manager-78979fc445-l25ck\" (UID: \"63639485-2ddb-4983-921a-9de5dda98f0f\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.055742 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.083591 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.094491 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097448 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdq6\" (UniqueName: \"kubernetes.io/projected/9891b17e-81f9-4999-b489-db3e162c2a54-kube-api-access-zzdq6\") pod \"heat-operator-controller-manager-658dd65b86-s49vr\" (UID: \"9891b17e-81f9-4999-b489-db3e162c2a54\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097550 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfjp\" (UniqueName: \"kubernetes.io/projected/e4480343-1920-4926-8668-e47e5bbfb646-kube-api-access-2xfjp\") pod \"ironic-operator-controller-manager-f99f54bc8-g5ckd\" (UID: \"e4480343-1920-4926-8668-e47e5bbfb646\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8nq\" (UniqueName: \"kubernetes.io/projected/51db22df-3d25-4c12-b104-eb3848940958-kube-api-access-sd8nq\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nxc7n\" (UID: \"51db22df-3d25-4c12-b104-eb3848940958\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqm82\" (UniqueName: \"kubernetes.io/projected/9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6-kube-api-access-zqm82\") pod \"glance-operator-controller-manager-7b549fc966-w5c7d\" (UID: \"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqv5l\" (UniqueName: \"kubernetes.io/projected/6040cced-684e-4521-9c4e-1debba9d5320-kube-api-access-nqv5l\") pod \"keystone-operator-controller-manager-568985c78-4nzmw\" (UID: \"6040cced-684e-4521-9c4e-1debba9d5320\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvgt\" (UniqueName: 
\"kubernetes.io/projected/e8c91cda-4264-401f-83de-20ddcf5f0d4d-kube-api-access-9fvgt\") pod \"designate-operator-controller-manager-66f8b87655-l4fld\" (UID: \"e8c91cda-4264-401f-83de-20ddcf5f0d4d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.097678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc688\" (UniqueName: \"kubernetes.io/projected/24886819-7c1f-4b1f-880e-4b2102e302c1-kube-api-access-kc688\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.103756 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.103817 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:46.603801279 +0000 UTC m=+952.053706060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.107982 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.129602 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.130705 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.138081 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xpxkv" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.142359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc688\" (UniqueName: \"kubernetes.io/projected/24886819-7c1f-4b1f-880e-4b2102e302c1-kube-api-access-kc688\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.159082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdq6\" (UniqueName: \"kubernetes.io/projected/9891b17e-81f9-4999-b489-db3e162c2a54-kube-api-access-zzdq6\") pod \"heat-operator-controller-manager-658dd65b86-s49vr\" (UID: \"9891b17e-81f9-4999-b489-db3e162c2a54\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.162177 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqm82\" (UniqueName: \"kubernetes.io/projected/9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6-kube-api-access-zqm82\") pod \"glance-operator-controller-manager-7b549fc966-w5c7d\" (UID: \"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6\") " 
pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.166008 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.179670 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.180259 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.192976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvgt\" (UniqueName: \"kubernetes.io/projected/e8c91cda-4264-401f-83de-20ddcf5f0d4d-kube-api-access-9fvgt\") pod \"designate-operator-controller-manager-66f8b87655-l4fld\" (UID: \"e8c91cda-4264-401f-83de-20ddcf5f0d4d\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.194554 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.195701 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.198230 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vshzg" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.199142 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8nq\" (UniqueName: \"kubernetes.io/projected/51db22df-3d25-4c12-b104-eb3848940958-kube-api-access-sd8nq\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-nxc7n\" (UID: \"51db22df-3d25-4c12-b104-eb3848940958\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.199722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfjp\" (UniqueName: \"kubernetes.io/projected/e4480343-1920-4926-8668-e47e5bbfb646-kube-api-access-2xfjp\") pod \"ironic-operator-controller-manager-f99f54bc8-g5ckd\" (UID: \"e4480343-1920-4926-8668-e47e5bbfb646\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.213341 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kww9n\" (UniqueName: \"kubernetes.io/projected/ddfee9e4-1084-4750-ab19-473dde7a2fb6-kube-api-access-kww9n\") pod \"manila-operator-controller-manager-598945d5b8-6gtz5\" (UID: \"ddfee9e4-1084-4750-ab19-473dde7a2fb6\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.213492 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqhbt\" (UniqueName: \"kubernetes.io/projected/e604d4a1-bf95-49df-a854-b15337b7fae7-kube-api-access-tqhbt\") pod 
\"mariadb-operator-controller-manager-7b88bfc995-4dv6h\" (UID: \"e604d4a1-bf95-49df-a854-b15337b7fae7\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.213671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqv5l\" (UniqueName: \"kubernetes.io/projected/6040cced-684e-4521-9c4e-1debba9d5320-kube-api-access-nqv5l\") pod \"keystone-operator-controller-manager-568985c78-4nzmw\" (UID: \"6040cced-684e-4521-9c4e-1debba9d5320\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.226683 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.238093 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqv5l\" (UniqueName: \"kubernetes.io/projected/6040cced-684e-4521-9c4e-1debba9d5320-kube-api-access-nqv5l\") pod \"keystone-operator-controller-manager-568985c78-4nzmw\" (UID: \"6040cced-684e-4521-9c4e-1debba9d5320\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.284936 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfjp\" (UniqueName: \"kubernetes.io/projected/e4480343-1920-4926-8668-e47e5bbfb646-kube-api-access-2xfjp\") pod \"ironic-operator-controller-manager-f99f54bc8-g5ckd\" (UID: \"e4480343-1920-4926-8668-e47e5bbfb646\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.297331 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.301115 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqhbt\" (UniqueName: \"kubernetes.io/projected/e604d4a1-bf95-49df-a854-b15337b7fae7-kube-api-access-tqhbt\") pod \"mariadb-operator-controller-manager-7b88bfc995-4dv6h\" (UID: \"e604d4a1-bf95-49df-a854-b15337b7fae7\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wwh\" (UniqueName: \"kubernetes.io/projected/9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb-kube-api-access-j7wwh\") pod \"nova-operator-controller-manager-5fbbf8b6cc-69kx5\" (UID: \"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgzb\" (UniqueName: \"kubernetes.io/projected/848b9588-10d2-4bd4-bcc0-cccd55334c85-kube-api-access-dlgzb\") pod \"neutron-operator-controller-manager-7cd87b778f-q8wx7\" (UID: \"848b9588-10d2-4bd4-bcc0-cccd55334c85\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.320908 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kww9n\" (UniqueName: 
\"kubernetes.io/projected/ddfee9e4-1084-4750-ab19-473dde7a2fb6-kube-api-access-kww9n\") pod \"manila-operator-controller-manager-598945d5b8-6gtz5\" (UID: \"ddfee9e4-1084-4750-ab19-473dde7a2fb6\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.356978 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.360959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqhbt\" (UniqueName: \"kubernetes.io/projected/e604d4a1-bf95-49df-a854-b15337b7fae7-kube-api-access-tqhbt\") pod \"mariadb-operator-controller-manager-7b88bfc995-4dv6h\" (UID: \"e604d4a1-bf95-49df-a854-b15337b7fae7\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.374985 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.375455 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kww9n\" (UniqueName: \"kubernetes.io/projected/ddfee9e4-1084-4750-ab19-473dde7a2fb6-kube-api-access-kww9n\") pod \"manila-operator-controller-manager-598945d5b8-6gtz5\" (UID: \"ddfee9e4-1084-4750-ab19-473dde7a2fb6\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.390376 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.395353 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ldn2c" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.419987 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.421233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgzb\" (UniqueName: \"kubernetes.io/projected/848b9588-10d2-4bd4-bcc0-cccd55334c85-kube-api-access-dlgzb\") pod \"neutron-operator-controller-manager-7cd87b778f-q8wx7\" (UID: \"848b9588-10d2-4bd4-bcc0-cccd55334c85\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.421299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wwh\" (UniqueName: \"kubernetes.io/projected/9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb-kube-api-access-j7wwh\") pod \"nova-operator-controller-manager-5fbbf8b6cc-69kx5\" (UID: \"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.421337 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8xk\" (UniqueName: \"kubernetes.io/projected/fab7e320-c116-4603-9aac-2e310be1b209-kube-api-access-zg8xk\") pod \"octavia-operator-controller-manager-68c649d9d-pnk72\" (UID: \"fab7e320-c116-4603-9aac-2e310be1b209\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.435862 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.443850 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.444808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.450022 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tknwf" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.463192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgzb\" (UniqueName: \"kubernetes.io/projected/848b9588-10d2-4bd4-bcc0-cccd55334c85-kube-api-access-dlgzb\") pod \"neutron-operator-controller-manager-7cd87b778f-q8wx7\" (UID: \"848b9588-10d2-4bd4-bcc0-cccd55334c85\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.463304 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wwh\" (UniqueName: \"kubernetes.io/projected/9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb-kube-api-access-j7wwh\") pod \"nova-operator-controller-manager-5fbbf8b6cc-69kx5\" (UID: \"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.463358 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.467921 4727 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.469125 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.476223 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nw52s" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.491958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.505987 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.516876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.517192 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.518684 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.520916 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jb57n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.523182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8xk\" (UniqueName: \"kubernetes.io/projected/fab7e320-c116-4603-9aac-2e310be1b209-kube-api-access-zg8xk\") pod \"octavia-operator-controller-manager-68c649d9d-pnk72\" (UID: \"fab7e320-c116-4603-9aac-2e310be1b209\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.523248 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6d9\" (UniqueName: \"kubernetes.io/projected/15c1d49b-c086-4c30-9a99-e0fb597dd76f-kube-api-access-6l6d9\") pod \"placement-operator-controller-manager-9b6f8f78c-cc8k9\" (UID: \"15c1d49b-c086-4c30-9a99-e0fb597dd76f\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.527655 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.528247 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.530040 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.532687 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4hsw8" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.550172 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.563344 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.564251 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8xk\" (UniqueName: \"kubernetes.io/projected/fab7e320-c116-4603-9aac-2e310be1b209-kube-api-access-zg8xk\") pod \"octavia-operator-controller-manager-68c649d9d-pnk72\" (UID: \"fab7e320-c116-4603-9aac-2e310be1b209\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.569543 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.570737 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.600300 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.603256 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.619587 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630701 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfzt\" (UniqueName: \"kubernetes.io/projected/558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6-kube-api-access-9wfzt\") pod \"ovn-operator-controller-manager-bf6d4f946-gkkm4\" (UID: \"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l6d\" (UniqueName: \"kubernetes.io/projected/3550e1cd-642e-481c-b98f-b6d3770f51ca-kube-api-access-v2l6d\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8t6\" (UniqueName: \"kubernetes.io/projected/c371fa9c-dd02-4673-99aa-4ec8fa8d9e07-kube-api-access-rd8t6\") pod \"telemetry-operator-controller-manager-68d988df55-x4r9z\" (UID: \"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.630956 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6d9\" (UniqueName: 
\"kubernetes.io/projected/15c1d49b-c086-4c30-9a99-e0fb597dd76f-kube-api-access-6l6d9\") pod \"placement-operator-controller-manager-9b6f8f78c-cc8k9\" (UID: \"15c1d49b-c086-4c30-9a99-e0fb597dd76f\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.631001 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.631032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n754b\" (UniqueName: \"kubernetes.io/projected/ba0be6cc-1e31-4421-aa33-1e2514069376-kube-api-access-n754b\") pod \"swift-operator-controller-manager-bb586bbf4-vgcgj\" (UID: \"ba0be6cc-1e31-4421-aa33-1e2514069376\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.631072 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.655725 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.655802 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.655778663 +0000 UTC m=+953.105683444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.655847 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.658041 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wdt6n" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.678491 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.679548 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.685886 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fxndl" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.686904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.689908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6d9\" (UniqueName: \"kubernetes.io/projected/15c1d49b-c086-4c30-9a99-e0fb597dd76f-kube-api-access-6l6d9\") pod \"placement-operator-controller-manager-9b6f8f78c-cc8k9\" (UID: \"15c1d49b-c086-4c30-9a99-e0fb597dd76f\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.702946 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.703944 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.712717 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7kdz4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732690 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfzt\" (UniqueName: \"kubernetes.io/projected/558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6-kube-api-access-9wfzt\") pod \"ovn-operator-controller-manager-bf6d4f946-gkkm4\" (UID: \"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2l6d\" (UniqueName: \"kubernetes.io/projected/3550e1cd-642e-481c-b98f-b6d3770f51ca-kube-api-access-v2l6d\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8t6\" (UniqueName: \"kubernetes.io/projected/c371fa9c-dd02-4673-99aa-4ec8fa8d9e07-kube-api-access-rd8t6\") pod \"telemetry-operator-controller-manager-68d988df55-x4r9z\" (UID: \"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwcn\" (UniqueName: \"kubernetes.io/projected/e3f94965-fce3-4e35-9f97-5047e05dd50a-kube-api-access-vxwcn\") 
pod \"test-operator-controller-manager-6c866cfdcb-m8s9d\" (UID: \"e3f94965-fce3-4e35-9f97-5047e05dd50a\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2lj\" (UniqueName: \"kubernetes.io/projected/9300f2a9-97a8-4868-9485-8dd5d51df39e-kube-api-access-ms2lj\") pod \"watcher-operator-controller-manager-9dbdf6486-jvkn5\" (UID: \"9300f2a9-97a8-4868-9485-8dd5d51df39e\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732905 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n754b\" (UniqueName: \"kubernetes.io/projected/ba0be6cc-1e31-4421-aa33-1e2514069376-kube-api-access-n754b\") pod \"swift-operator-controller-manager-bb586bbf4-vgcgj\" (UID: \"ba0be6cc-1e31-4421-aa33-1e2514069376\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.732936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.733101 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: E0109 11:01:46.733148 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert 
podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.233131489 +0000 UTC m=+952.683036270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.737204 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.746781 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.764593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8t6\" (UniqueName: \"kubernetes.io/projected/c371fa9c-dd02-4673-99aa-4ec8fa8d9e07-kube-api-access-rd8t6\") pod \"telemetry-operator-controller-manager-68d988df55-x4r9z\" (UID: \"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.783027 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfzt\" (UniqueName: \"kubernetes.io/projected/558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6-kube-api-access-9wfzt\") pod \"ovn-operator-controller-manager-bf6d4f946-gkkm4\" (UID: \"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.801977 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n754b\" (UniqueName: 
\"kubernetes.io/projected/ba0be6cc-1e31-4421-aa33-1e2514069376-kube-api-access-n754b\") pod \"swift-operator-controller-manager-bb586bbf4-vgcgj\" (UID: \"ba0be6cc-1e31-4421-aa33-1e2514069376\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.802445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2l6d\" (UniqueName: \"kubernetes.io/projected/3550e1cd-642e-481c-b98f-b6d3770f51ca-kube-api-access-v2l6d\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.819088 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.820209 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.833011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.834178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwcn\" (UniqueName: \"kubernetes.io/projected/e3f94965-fce3-4e35-9f97-5047e05dd50a-kube-api-access-vxwcn\") pod \"test-operator-controller-manager-6c866cfdcb-m8s9d\" (UID: \"e3f94965-fce3-4e35-9f97-5047e05dd50a\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.834224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2lj\" (UniqueName: \"kubernetes.io/projected/9300f2a9-97a8-4868-9485-8dd5d51df39e-kube-api-access-ms2lj\") pod \"watcher-operator-controller-manager-9dbdf6486-jvkn5\" (UID: \"9300f2a9-97a8-4868-9485-8dd5d51df39e\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.834898 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gggbj" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.835086 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.835207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.864086 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwcn\" (UniqueName: 
\"kubernetes.io/projected/e3f94965-fce3-4e35-9f97-5047e05dd50a-kube-api-access-vxwcn\") pod \"test-operator-controller-manager-6c866cfdcb-m8s9d\" (UID: \"e3f94965-fce3-4e35-9f97-5047e05dd50a\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.865905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2lj\" (UniqueName: \"kubernetes.io/projected/9300f2a9-97a8-4868-9485-8dd5d51df39e-kube-api-access-ms2lj\") pod \"watcher-operator-controller-manager-9dbdf6486-jvkn5\" (UID: \"9300f2a9-97a8-4868-9485-8dd5d51df39e\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.872047 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.910026 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.911159 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.919589 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jh84c" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.924359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz"] Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.945994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.946158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.946474 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbbt\" (UniqueName: \"kubernetes.io/projected/6a33b307-e521-43c4-8e35-3e9d7d553716-kube-api-access-spbbt\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:46 crc kubenswrapper[4727]: I0109 11:01:46.996930 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.038021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.047972 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pth4f\" (UniqueName: \"kubernetes.io/projected/ee5399a2-4352-4013-9c26-a40e4bc815e3-kube-api-access-pth4f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2m6mz\" (UID: \"ee5399a2-4352-4013-9c26-a40e4bc815e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbbt\" (UniqueName: \"kubernetes.io/projected/6a33b307-e521-43c4-8e35-3e9d7d553716-kube-api-access-spbbt\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048183 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") 
pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048307 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048356 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.54833881 +0000 UTC m=+952.998243581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048701 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.048716 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx"] Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.048725 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:47.548718371 +0000 UTC m=+952.998623152 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.056913 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.070907 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbbt\" (UniqueName: \"kubernetes.io/projected/6a33b307-e521-43c4-8e35-3e9d7d553716-kube-api-access-spbbt\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.080648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.104158 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.144582 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf57a8b19_1f94_4cc4_af28_f7c506f93de5.slice/crio-31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33 WatchSource:0}: Error finding container 31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33: Status 404 returned error can't find the container with id 31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33 Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.147005 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e494b5d_8aeb_47ed_b0a6_5e83b7f58bf6.slice/crio-4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be WatchSource:0}: Error finding container 4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be: Status 404 returned error can't find the container with id 4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.149063 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pth4f\" (UniqueName: \"kubernetes.io/projected/ee5399a2-4352-4013-9c26-a40e4bc815e3-kube-api-access-pth4f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2m6mz\" (UID: \"ee5399a2-4352-4013-9c26-a40e4bc815e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.198230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pth4f\" (UniqueName: \"kubernetes.io/projected/ee5399a2-4352-4013-9c26-a40e4bc815e3-kube-api-access-pth4f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2m6mz\" (UID: 
\"ee5399a2-4352-4013-9c26-a40e4bc815e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.250389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.250643 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.250713 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:48.250690325 +0000 UTC m=+953.700595106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.251198 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.558346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.558873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.558695 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.559104 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:48.559087958 +0000 UTC m=+954.008992739 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.559047 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.559633 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:48.559625313 +0000 UTC m=+954.009530094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.608567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" event={"ID":"f57a8b19-1f94-4cc4-af28-f7c506f93de5","Type":"ContainerStarted","Data":"31ebfe957d824b786efa5267733297cea566ec67f0a8a3aa321e17033e06ae33"} Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.609626 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" event={"ID":"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6","Type":"ContainerStarted","Data":"4f021276c1c62a26cb8b92b1699b276d7c260ad72e43a56a3634f549247a75be"} Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.663291 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.663619 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: E0109 11:01:47.663699 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:49.663680765 +0000 UTC m=+955.113585536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.706754 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.731406 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.755162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.763740 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.782026 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.795322 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.798452 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c91cda_4264_401f_83de_20ddcf5f0d4d.slice/crio-ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55 WatchSource:0}: Error finding container ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55: Status 404 returned error can't find the container with id ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55 Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.803592 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.812564 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.818929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.828885 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4480343_1920_4926_8668_e47e5bbfb646.slice/crio-5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe WatchSource:0}: Error finding container 5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe: Status 404 returned error can't find the container with id 5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe Jan 
09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.959937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.981652 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4"] Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.989211 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.989872 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9300f2a9_97a8_4868_9485_8dd5d51df39e.slice/crio-7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6 WatchSource:0}: Error finding container 7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6: Status 404 returned error can't find the container with id 7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6 Jan 09 11:01:47 crc kubenswrapper[4727]: I0109 11:01:47.995825 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9"] Jan 09 11:01:47 crc kubenswrapper[4727]: W0109 11:01:47.999111 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddfee9e4_1084_4750_ab19_473dde7a2fb6.slice/crio-04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da WatchSource:0}: Error finding container 04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da: Status 404 returned error can't find the container with id 04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.000970 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms2lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-jvkn5_openstack-operators(9300f2a9-97a8-4868-9485-8dd5d51df39e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.000985 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l6d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-cc8k9_openstack-operators(15c1d49b-c086-4c30-9a99-e0fb597dd76f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.002313 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5"] Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.002493 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podUID="15c1d49b-c086-4c30-9a99-e0fb597dd76f" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.002581 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podUID="9300f2a9-97a8-4868-9485-8dd5d51df39e" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.003271 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kww9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-6gtz5_openstack-operators(ddfee9e4-1084-4750-ab19-473dde7a2fb6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.004456 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podUID="ddfee9e4-1084-4750-ab19-473dde7a2fb6" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.008457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72"] Jan 09 11:01:48 crc kubenswrapper[4727]: W0109 11:01:48.011040 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab7e320_c116_4603_9aac_2e310be1b209.slice/crio-ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808 WatchSource:0}: Error finding container ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808: Status 404 returned error can't find the container with id ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808 Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.014197 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zg8xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-pnk72_openstack-operators(fab7e320-c116-4603-9aac-2e310be1b209): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.015309 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podUID="fab7e320-c116-4603-9aac-2e310be1b209" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.131015 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj"] Jan 09 11:01:48 crc kubenswrapper[4727]: W0109 11:01:48.134911 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba0be6cc_1e31_4421_aa33_1e2514069376.slice/crio-ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f WatchSource:0}: Error finding container ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f: Status 404 returned error can't find the container with id 
ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.151822 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz"] Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.156841 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z"] Jan 09 11:01:48 crc kubenswrapper[4727]: W0109 11:01:48.167173 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc371fa9c_dd02_4673_99aa_4ec8fa8d9e07.slice/crio-964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723 WatchSource:0}: Error finding container 964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723: Status 404 returned error can't find the container with id 964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723 Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.173700 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rd8t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-68d988df55-x4r9z_openstack-operators(c371fa9c-dd02-4673-99aa-4ec8fa8d9e07): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.175047 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podUID="c371fa9c-dd02-4673-99aa-4ec8fa8d9e07" Jan 09 11:01:48 crc 
kubenswrapper[4727]: W0109 11:01:48.190668 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5399a2_4352_4013_9c26_a40e4bc815e3.slice/crio-046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc WatchSource:0}: Error finding container 046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc: Status 404 returned error can't find the container with id 046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.192121 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pth4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2m6mz_openstack-operators(ee5399a2-4352-4013-9c26-a40e4bc815e3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.193983 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podUID="ee5399a2-4352-4013-9c26-a40e4bc815e3" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.274050 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.274302 4727 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.274405 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:50.274380974 +0000 UTC m=+955.724285755 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.578359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.578554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578637 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578685 4727 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578782 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:50.578730609 +0000 UTC m=+956.028635390 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.578817 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:50.578808421 +0000 UTC m=+956.028713212 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.625543 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" event={"ID":"15c1d49b-c086-4c30-9a99-e0fb597dd76f","Type":"ContainerStarted","Data":"a756cb36cb11b3b33c8108da3617daa79fd8928405734a0f8b9274b42ab599c5"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.628347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podUID="15c1d49b-c086-4c30-9a99-e0fb597dd76f" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.629124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" event={"ID":"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07","Type":"ContainerStarted","Data":"964d3b75a26498d321f071227b37eb88840afc71259633f497967b0c09ff1723"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.636129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" event={"ID":"e604d4a1-bf95-49df-a854-b15337b7fae7","Type":"ContainerStarted","Data":"e5a945f53cbd569d1611ccecf6d63a02ce59f5ade3fe1d9f687ebbb5eedc4d72"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.637131 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podUID="c371fa9c-dd02-4673-99aa-4ec8fa8d9e07" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.641944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" event={"ID":"9891b17e-81f9-4999-b489-db3e162c2a54","Type":"ContainerStarted","Data":"aaba35acac5990b88021453d6173eb0cdf03cf7658472ecac5ab4fb85b091ffc"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.647724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" event={"ID":"ee5399a2-4352-4013-9c26-a40e4bc815e3","Type":"ContainerStarted","Data":"046639a83ce84a0909597263d692993755efea252809fa0e896682d280afe1dc"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.649878 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podUID="ee5399a2-4352-4013-9c26-a40e4bc815e3" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.650546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" event={"ID":"e3f94965-fce3-4e35-9f97-5047e05dd50a","Type":"ContainerStarted","Data":"721ed2ab96f86a54afb6fffb5e390165b5f9b68ef273933572b79e1a458625e6"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.653297 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" event={"ID":"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb","Type":"ContainerStarted","Data":"24532db6f50a9696ff5f485e6ab155d385e9253ad98ea34448311a92e8dd6c05"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.664035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" event={"ID":"9300f2a9-97a8-4868-9485-8dd5d51df39e","Type":"ContainerStarted","Data":"7490bc5603798a7c2bfc2ec0618261ea8cfc65d86f6c9d2c362cd337493bdbe6"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.666268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" event={"ID":"51db22df-3d25-4c12-b104-eb3848940958","Type":"ContainerStarted","Data":"48743ec3f802836fe1d9cdd56b96cc1dbe5d84bb875d3d21e62e04b40d4a6f9f"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.667852 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podUID="9300f2a9-97a8-4868-9485-8dd5d51df39e" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.677366 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" event={"ID":"848b9588-10d2-4bd4-bcc0-cccd55334c85","Type":"ContainerStarted","Data":"061118e73ac27746b69fb9b2f2017919f8c96781dd747f9fb14baa5fb2ab70b6"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.697946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" 
event={"ID":"ddfee9e4-1084-4750-ab19-473dde7a2fb6","Type":"ContainerStarted","Data":"04a23301e1ba70c0b20a35bf44e4d062ef230b6acedf2e2d326c176809b4d6da"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.702950 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podUID="ddfee9e4-1084-4750-ab19-473dde7a2fb6" Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.704779 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" event={"ID":"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6","Type":"ContainerStarted","Data":"a67d5f210f9baf82a8f41f7e3259d08d199abed1c186da38111f6756b12f53d4"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.720055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" event={"ID":"ba0be6cc-1e31-4421-aa33-1e2514069376","Type":"ContainerStarted","Data":"ec01a23b80ca85ad91eb48429a05b937e962258bc330e54c4b6671ada931d56f"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.721949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" event={"ID":"63639485-2ddb-4983-921a-9de5dda98f0f","Type":"ContainerStarted","Data":"2d39ba517bfa72e25c5713e884408d015e8a01c6b0bfec670b9028cf641909fb"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.727093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" 
event={"ID":"e8c91cda-4264-401f-83de-20ddcf5f0d4d","Type":"ContainerStarted","Data":"ab9da46e3161ef35821ff75f0aaae1855733c9dc6bbceea4c1b0eacd8b39fe55"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.728454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" event={"ID":"6040cced-684e-4521-9c4e-1debba9d5320","Type":"ContainerStarted","Data":"ee1b87ead52e3b6aabff4bc3e39a72a3182bd005cab4e5e2537dd152b6281469"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.732288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" event={"ID":"e4480343-1920-4926-8668-e47e5bbfb646","Type":"ContainerStarted","Data":"5f04a90bc595b98b9bfc7a25e9bf700a01d1175be3c1db53bf91c7a2f004edfe"} Jan 09 11:01:48 crc kubenswrapper[4727]: I0109 11:01:48.735848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" event={"ID":"fab7e320-c116-4603-9aac-2e310be1b209","Type":"ContainerStarted","Data":"ac999443c889e6d80f55ba9fb33ad8e656c87cbb76c7d923c5fc4612a9823808"} Jan 09 11:01:48 crc kubenswrapper[4727]: E0109 11:01:48.740831 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podUID="fab7e320-c116-4603-9aac-2e310be1b209" Jan 09 11:01:49 crc kubenswrapper[4727]: I0109 11:01:49.718814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: 
\"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.718997 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.719341 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:53.719322432 +0000 UTC m=+959.169227203 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.765779 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podUID="9300f2a9-97a8-4868-9485-8dd5d51df39e" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.765798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podUID="ddfee9e4-1084-4750-ab19-473dde7a2fb6" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.765858 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podUID="fab7e320-c116-4603-9aac-2e310be1b209" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.766101 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podUID="c371fa9c-dd02-4673-99aa-4ec8fa8d9e07" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.766164 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podUID="15c1d49b-c086-4c30-9a99-e0fb597dd76f" Jan 09 11:01:49 crc kubenswrapper[4727]: E0109 11:01:49.774421 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podUID="ee5399a2-4352-4013-9c26-a40e4bc815e3" Jan 09 11:01:50 crc kubenswrapper[4727]: I0109 11:01:50.330396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.330817 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.330938 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:01:54.330918378 +0000 UTC m=+959.780823159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: I0109 11:01:50.636320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:50 crc kubenswrapper[4727]: I0109 11:01:50.636384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: 
\"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.636568 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.636653 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:54.636633433 +0000 UTC m=+960.086538214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.641583 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:50 crc kubenswrapper[4727]: E0109 11:01:50.641667 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:01:54.641649849 +0000 UTC m=+960.091554620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:01:53 crc kubenswrapper[4727]: I0109 11:01:53.790585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:01:53 crc kubenswrapper[4727]: E0109 11:01:53.790780 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:53 crc kubenswrapper[4727]: E0109 11:01:53.791214 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:01.791186783 +0000 UTC m=+967.241091604 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: I0109 11:01:54.399587 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.399771 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.399842 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert podName:3550e1cd-642e-481c-b98f-b6d3770f51ca nodeName:}" failed. No retries permitted until 2026-01-09 11:02:02.399825453 +0000 UTC m=+967.849730234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" (UID: "3550e1cd-642e-481c-b98f-b6d3770f51ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: I0109 11:01:54.703794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:54 crc kubenswrapper[4727]: I0109 11:01:54.703928 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.703994 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.704064 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.704086 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:02.704063526 +0000 UTC m=+968.153968307 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:01:54 crc kubenswrapper[4727]: E0109 11:01:54.704106 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:02.704095146 +0000 UTC m=+968.153999927 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:02:01 crc kubenswrapper[4727]: I0109 11:02:01.821659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:01 crc kubenswrapper[4727]: E0109 11:02:01.821923 4727 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 09 11:02:01 crc kubenswrapper[4727]: E0109 11:02:01.822310 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert podName:24886819-7c1f-4b1f-880e-4b2102e302c1 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:17.822287116 +0000 UTC m=+983.272191977 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert") pod "infra-operator-controller-manager-6d99759cf-qpmcd" (UID: "24886819-7c1f-4b1f-880e-4b2102e302c1") : secret "infra-operator-webhook-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.433427 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.444087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3550e1cd-642e-481c-b98f-b6d3770f51ca-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh\" (UID: \"3550e1cd-642e-481c-b98f-b6d3770f51ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.581477 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tknwf" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.589824 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.738579 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:02 crc kubenswrapper[4727]: I0109 11:02:02.738653 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738722 4727 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738789 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:18.738773502 +0000 UTC m=+984.188678283 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "metrics-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738806 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 09 11:02:02 crc kubenswrapper[4727]: E0109 11:02:02.738888 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs podName:6a33b307-e521-43c4-8e35-3e9d7d553716 nodeName:}" failed. No retries permitted until 2026-01-09 11:02:18.738867546 +0000 UTC m=+984.188772337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs") pod "openstack-operator-controller-manager-7db9fd4464-5h9ft" (UID: "6a33b307-e521-43c4-8e35-3e9d7d553716") : secret "webhook-server-cert" not found Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.073880 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.074202 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzdq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-s49vr_openstack-operators(9891b17e-81f9-4999-b489-db3e162c2a54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.075554 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" podUID="9891b17e-81f9-4999-b489-db3e162c2a54" Jan 09 11:02:04 crc kubenswrapper[4727]: E0109 11:02:04.913478 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" podUID="9891b17e-81f9-4999-b489-db3e162c2a54" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.756018 4727 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.756712 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n754b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-vgcgj_openstack-operators(ba0be6cc-1e31-4421-aa33-1e2514069376): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.757953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" podUID="ba0be6cc-1e31-4421-aa33-1e2514069376" Jan 09 11:02:05 crc kubenswrapper[4727]: E0109 11:02:05.925587 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" podUID="ba0be6cc-1e31-4421-aa33-1e2514069376" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.349331 4727 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.349562 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqhbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b88bfc995-4dv6h_openstack-operators(e604d4a1-bf95-49df-a854-b15337b7fae7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.350767 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" podUID="e604d4a1-bf95-49df-a854-b15337b7fae7" Jan 09 11:02:06 crc kubenswrapper[4727]: E0109 11:02:06.931879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c10647131e6fa6afeb11ea28e513b60f22dbfbb4ddc3727850b1fe5799890c41\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" podUID="e604d4a1-bf95-49df-a854-b15337b7fae7" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.084464 4727 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.084780 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wfzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-gkkm4_openstack-operators(558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.085976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" podUID="558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.605643 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.606030 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-f99f54bc8-g5ckd_openstack-operators(e4480343-1920-4926-8668-e47e5bbfb646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.607382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" podUID="e4480343-1920-4926-8668-e47e5bbfb646" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.938277 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" podUID="e4480343-1920-4926-8668-e47e5bbfb646" Jan 09 11:02:07 crc kubenswrapper[4727]: E0109 11:02:07.939858 4727 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" podUID="558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6" Jan 09 11:02:09 crc kubenswrapper[4727]: I0109 11:02:09.405396 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:02:09 crc kubenswrapper[4727]: I0109 11:02:09.405992 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.709237 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.709465 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7wwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-69kx5_openstack-operators(9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.710762 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" podUID="9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb" Jan 09 11:02:09 crc kubenswrapper[4727]: E0109 11:02:09.953531 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" podUID="9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.922650 4727 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.924168 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqv5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-4nzmw_openstack-operators(6040cced-684e-4521-9c4e-1debba9d5320): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.925958 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" podUID="6040cced-684e-4521-9c4e-1debba9d5320" Jan 09 11:02:13 crc kubenswrapper[4727]: E0109 11:02:13.983527 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" podUID="6040cced-684e-4521-9c4e-1debba9d5320" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.613537 4727 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh"] Jan 09 11:02:14 crc kubenswrapper[4727]: W0109 11:02:14.617963 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3550e1cd_642e_481c_b98f_b6d3770f51ca.slice/crio-f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9 WatchSource:0}: Error finding container f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9: Status 404 returned error can't find the container with id f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9 Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.989805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" event={"ID":"ddfee9e4-1084-4750-ab19-473dde7a2fb6","Type":"ContainerStarted","Data":"c3d247fa40c5480d5aab2f1f6dc84b14a8b413ccd080e599d429378eb5874d1b"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.990860 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.992973 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" event={"ID":"9300f2a9-97a8-4868-9485-8dd5d51df39e","Type":"ContainerStarted","Data":"837b452b4285068a8e89566b704f01a147caf7696f203df4cf53ab1d6e29ff05"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.993179 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.994899 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" 
event={"ID":"c371fa9c-dd02-4673-99aa-4ec8fa8d9e07","Type":"ContainerStarted","Data":"f8f3984e3e5f52173e77180f1dc930be0f613c15584dca1d15baa6d88cc21c50"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.995277 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.999789 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" event={"ID":"e8c91cda-4264-401f-83de-20ddcf5f0d4d","Type":"ContainerStarted","Data":"21f2277f2edb20274e26efd008a107ca81526a454e12ff5145af2a9690097ad4"} Jan 09 11:02:14 crc kubenswrapper[4727]: I0109 11:02:14.999941 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.003985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" event={"ID":"f57a8b19-1f94-4cc4-af28-f7c506f93de5","Type":"ContainerStarted","Data":"7ecfaeef59a2104b98f4104ea8a3b4a99b2a7f24fb2c13b20200de0f393b99e4"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.004087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.005788 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" event={"ID":"63639485-2ddb-4983-921a-9de5dda98f0f","Type":"ContainerStarted","Data":"2509a2ab82303e8687651e9b58caeb210127f5593d47d45277da1ab313298b0c"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.005906 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.007646 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" event={"ID":"fab7e320-c116-4603-9aac-2e310be1b209","Type":"ContainerStarted","Data":"4ac20d6ec0be98bb89330d450d70b08ba3fd3d514ac3c38b707b4fd906d7bdb0"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.007840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.010286 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" event={"ID":"848b9588-10d2-4bd4-bcc0-cccd55334c85","Type":"ContainerStarted","Data":"182193dbafe400f7dfd00197e79fc65c62aad46d4dfe895f2fce0b1c20e4ed6b"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.010406 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.011806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" event={"ID":"ee5399a2-4352-4013-9c26-a40e4bc815e3","Type":"ContainerStarted","Data":"4e26075ecab307f19fe526a9072636d873a8255b9bbbd0d55e98e0c546e4f0f2"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.013144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" event={"ID":"9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6","Type":"ContainerStarted","Data":"990b7a72654399e7381999e2234f4b40be0ca16785ef6ba06890f1b31b515731"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.013268 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.015126 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" event={"ID":"51db22df-3d25-4c12-b104-eb3848940958","Type":"ContainerStarted","Data":"f7fe7b15c14b3db0a8226c1cec8c84eb8af81f6087cf1426e3966b1a32427b56"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.015200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.016985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" event={"ID":"3550e1cd-642e-481c-b98f-b6d3770f51ca","Type":"ContainerStarted","Data":"f29900c5ea56b5c6e58c3d31f9b25907345b2d13cd4cb9da4a0ac38cacbc90c9"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.018867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" event={"ID":"15c1d49b-c086-4c30-9a99-e0fb597dd76f","Type":"ContainerStarted","Data":"33d876af7a50c0608fbd3a2db0ab29ad0768dd098d562c637ad1610ff6cecade"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.019642 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.021386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" event={"ID":"e3f94965-fce3-4e35-9f97-5047e05dd50a","Type":"ContainerStarted","Data":"dbddf10a9a6b7304b9ef6683524de2cc3a50b4bbf6286548158bf305bfcb35b9"} Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.021581 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.027435 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" podStartSLOduration=3.611448848 podStartE2EDuration="30.027416193s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.003131759 +0000 UTC m=+953.453036540" lastFinishedPulling="2026-01-09 11:02:14.419099094 +0000 UTC m=+979.869003885" observedRunningTime="2026-01-09 11:02:15.020732198 +0000 UTC m=+980.470636999" watchObservedRunningTime="2026-01-09 11:02:15.027416193 +0000 UTC m=+980.477320984" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.037457 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2m6mz" podStartSLOduration=2.729889824 podStartE2EDuration="29.03744282s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.192018483 +0000 UTC m=+953.641923264" lastFinishedPulling="2026-01-09 11:02:14.499571479 +0000 UTC m=+979.949476260" observedRunningTime="2026-01-09 11:02:15.034753916 +0000 UTC m=+980.484658697" watchObservedRunningTime="2026-01-09 11:02:15.03744282 +0000 UTC m=+980.487347601" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.060197 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" podStartSLOduration=6.802890146 podStartE2EDuration="30.060172259s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.827912012 +0000 UTC m=+953.277816793" lastFinishedPulling="2026-01-09 11:02:11.085194125 +0000 UTC m=+976.535098906" observedRunningTime="2026-01-09 11:02:15.05657599 
+0000 UTC m=+980.506480771" watchObservedRunningTime="2026-01-09 11:02:15.060172259 +0000 UTC m=+980.510077050" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.075821 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" podStartSLOduration=6.221994046 podStartE2EDuration="30.075788651s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.75585488 +0000 UTC m=+953.205759661" lastFinishedPulling="2026-01-09 11:02:11.609649495 +0000 UTC m=+977.059554266" observedRunningTime="2026-01-09 11:02:15.075575455 +0000 UTC m=+980.525480266" watchObservedRunningTime="2026-01-09 11:02:15.075788651 +0000 UTC m=+980.525693432" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.108574 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" podStartSLOduration=6.207488794 podStartE2EDuration="30.108554657s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.183241817 +0000 UTC m=+952.633146598" lastFinishedPulling="2026-01-09 11:02:11.08430768 +0000 UTC m=+976.534212461" observedRunningTime="2026-01-09 11:02:15.104852434 +0000 UTC m=+980.554757215" watchObservedRunningTime="2026-01-09 11:02:15.108554657 +0000 UTC m=+980.558459438" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.126976 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" podStartSLOduration=2.921819235 podStartE2EDuration="29.126954775s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.173447024 +0000 UTC m=+953.623351805" lastFinishedPulling="2026-01-09 11:02:14.378582554 +0000 UTC m=+979.828487345" observedRunningTime="2026-01-09 11:02:15.122947554 +0000 UTC m=+980.572852335" 
watchObservedRunningTime="2026-01-09 11:02:15.126954775 +0000 UTC m=+980.576859556" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.171059 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" podStartSLOduration=6.268866692 podStartE2EDuration="30.171035695s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.182906227 +0000 UTC m=+952.632811008" lastFinishedPulling="2026-01-09 11:02:11.08507523 +0000 UTC m=+976.534980011" observedRunningTime="2026-01-09 11:02:15.169552344 +0000 UTC m=+980.619457125" watchObservedRunningTime="2026-01-09 11:02:15.171035695 +0000 UTC m=+980.620940476" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.204442 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" podStartSLOduration=6.922750607 podStartE2EDuration="30.204424188s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.803129172 +0000 UTC m=+953.253033953" lastFinishedPulling="2026-01-09 11:02:11.084802743 +0000 UTC m=+976.534707534" observedRunningTime="2026-01-09 11:02:15.201762054 +0000 UTC m=+980.651666825" watchObservedRunningTime="2026-01-09 11:02:15.204424188 +0000 UTC m=+980.654328969" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.241329 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" podStartSLOduration=2.852279933 podStartE2EDuration="29.241303517s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.014069257 +0000 UTC m=+953.463974038" lastFinishedPulling="2026-01-09 11:02:14.403092841 +0000 UTC m=+979.852997622" observedRunningTime="2026-01-09 11:02:15.230081817 +0000 UTC m=+980.679986598" 
watchObservedRunningTime="2026-01-09 11:02:15.241303517 +0000 UTC m=+980.691208298" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.287111 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" podStartSLOduration=7.009990285 podStartE2EDuration="30.287091124s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.807333174 +0000 UTC m=+953.257237955" lastFinishedPulling="2026-01-09 11:02:11.084434013 +0000 UTC m=+976.534338794" observedRunningTime="2026-01-09 11:02:15.285851919 +0000 UTC m=+980.735756700" watchObservedRunningTime="2026-01-09 11:02:15.287091124 +0000 UTC m=+980.736995905" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.322414 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" podStartSLOduration=2.919980406 podStartE2EDuration="29.3223969s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.00077184 +0000 UTC m=+953.450676621" lastFinishedPulling="2026-01-09 11:02:14.403188294 +0000 UTC m=+979.853093115" observedRunningTime="2026-01-09 11:02:15.321816083 +0000 UTC m=+980.771720864" watchObservedRunningTime="2026-01-09 11:02:15.3223969 +0000 UTC m=+980.772301681" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.380910 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" podStartSLOduration=3.058557158 podStartE2EDuration="29.380891417s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:48.000825462 +0000 UTC m=+953.450730253" lastFinishedPulling="2026-01-09 11:02:14.323159721 +0000 UTC m=+979.773064512" observedRunningTime="2026-01-09 11:02:15.34591393 +0000 UTC m=+980.795818711" 
watchObservedRunningTime="2026-01-09 11:02:15.380891417 +0000 UTC m=+980.830796198" Jan 09 11:02:15 crc kubenswrapper[4727]: I0109 11:02:15.382741 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" podStartSLOduration=6.287938163 podStartE2EDuration="29.382735238s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.990345098 +0000 UTC m=+953.440249879" lastFinishedPulling="2026-01-09 11:02:11.085142173 +0000 UTC m=+976.535046954" observedRunningTime="2026-01-09 11:02:15.377171164 +0000 UTC m=+980.827075945" watchObservedRunningTime="2026-01-09 11:02:15.382735238 +0000 UTC m=+980.832640019" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.036279 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" event={"ID":"9891b17e-81f9-4999-b489-db3e162c2a54","Type":"ContainerStarted","Data":"8f59f0c3e933c8f852e9647e86331578418d50c1827cb229b7c03afeea08d62c"} Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.038041 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.881693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" podStartSLOduration=3.905260009 podStartE2EDuration="32.881672082s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.725201611 +0000 UTC m=+953.175106392" lastFinishedPulling="2026-01-09 11:02:16.701613684 +0000 UTC m=+982.151518465" observedRunningTime="2026-01-09 11:02:17.067594133 +0000 UTC m=+982.517498914" watchObservedRunningTime="2026-01-09 11:02:17.881672082 +0000 UTC m=+983.331576873" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 
11:02:17.897489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:17 crc kubenswrapper[4727]: I0109 11:02:17.908065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24886819-7c1f-4b1f-880e-4b2102e302c1-cert\") pod \"infra-operator-controller-manager-6d99759cf-qpmcd\" (UID: \"24886819-7c1f-4b1f-880e-4b2102e302c1\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.048586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" event={"ID":"3550e1cd-642e-481c-b98f-b6d3770f51ca","Type":"ContainerStarted","Data":"176b427ab1bbe503ec8d4f662bacd76c4d8b2733cf8ed78cc4af43c0b1998af1"} Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.050170 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.067071 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t6tcr" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.075729 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.089026 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" podStartSLOduration=29.061703673 podStartE2EDuration="32.089007215s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:02:14.624128053 +0000 UTC m=+980.074032834" lastFinishedPulling="2026-01-09 11:02:17.651431595 +0000 UTC m=+983.101336376" observedRunningTime="2026-01-09 11:02:18.08883797 +0000 UTC m=+983.538742751" watchObservedRunningTime="2026-01-09 11:02:18.089007215 +0000 UTC m=+983.538911996" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.539295 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd"] Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.810594 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.810644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.818005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-webhook-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:18 crc kubenswrapper[4727]: I0109 11:02:18.818180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a33b307-e521-43c4-8e35-3e9d7d553716-metrics-certs\") pod \"openstack-operator-controller-manager-7db9fd4464-5h9ft\" (UID: \"6a33b307-e521-43c4-8e35-3e9d7d553716\") " pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.004250 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gggbj" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.013192 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.056288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" event={"ID":"e604d4a1-bf95-49df-a854-b15337b7fae7","Type":"ContainerStarted","Data":"81771aa716668ef5ba88db6676231eb7e72ec8697e6f08cf9a9a61793ff0dbb2"} Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.057431 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.062392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" event={"ID":"24886819-7c1f-4b1f-880e-4b2102e302c1","Type":"ContainerStarted","Data":"18f78ea8379c2449ff62c5cd9a9a4de60691782579634f6457ddd88f7c34be6d"} Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.477319 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" podStartSLOduration=3.783541357 podStartE2EDuration="34.477298019s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.823013409 +0000 UTC m=+953.272918190" lastFinishedPulling="2026-01-09 11:02:18.516770081 +0000 UTC m=+983.966674852" observedRunningTime="2026-01-09 11:02:19.08528512 +0000 UTC m=+984.535189921" watchObservedRunningTime="2026-01-09 11:02:19.477298019 +0000 UTC m=+984.927202800" Jan 09 11:02:19 crc kubenswrapper[4727]: I0109 11:02:19.478868 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft"] Jan 09 11:02:20 crc kubenswrapper[4727]: I0109 11:02:20.071900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" event={"ID":"6a33b307-e521-43c4-8e35-3e9d7d553716","Type":"ContainerStarted","Data":"0b429df8ed511c16a0f1a349427cefc68ef2a4ab2fa575b4f95209112ea894c0"} Jan 09 11:02:20 crc kubenswrapper[4727]: I0109 11:02:20.072244 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" event={"ID":"6a33b307-e521-43c4-8e35-3e9d7d553716","Type":"ContainerStarted","Data":"5da208dc0f252da6a2e9ae95b7b97d7889578afb6cd4fe5ef253add35b9455d5"} Jan 09 11:02:20 crc kubenswrapper[4727]: I0109 11:02:20.110248 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" podStartSLOduration=34.110231239 podStartE2EDuration="34.110231239s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:02:20.108036409 +0000 UTC m=+985.557941180" watchObservedRunningTime="2026-01-09 11:02:20.110231239 +0000 UTC m=+985.560136010" Jan 09 11:02:21 crc kubenswrapper[4727]: I0109 11:02:21.080220 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.087140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" event={"ID":"ba0be6cc-1e31-4421-aa33-1e2514069376","Type":"ContainerStarted","Data":"6b5bcce0a79d6a4f3d562697da3f385fee39658fd68d725280dd781bdacd850c"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.087814 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:02:22 crc kubenswrapper[4727]: 
I0109 11:02:22.088422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" event={"ID":"e4480343-1920-4926-8668-e47e5bbfb646","Type":"ContainerStarted","Data":"f1a372193e9da56d2fdbf199a6a845da49f4056b4caa86ce6b07e3f746f334a7"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.088653 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.093707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" event={"ID":"24886819-7c1f-4b1f-880e-4b2102e302c1","Type":"ContainerStarted","Data":"19c30ade5dff3793b2521850d69aab19dc0234e46fbace8af746c2681c61b9ba"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.093767 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.097622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" event={"ID":"558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6","Type":"ContainerStarted","Data":"351434f3caec0b44b429eb306e6ee454c84aba995141914e002a152dc3c541fd"} Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.098157 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.118262 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" podStartSLOduration=3.240084216 podStartE2EDuration="36.11824583s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 
11:01:48.137577292 +0000 UTC m=+953.587482073" lastFinishedPulling="2026-01-09 11:02:21.015738906 +0000 UTC m=+986.465643687" observedRunningTime="2026-01-09 11:02:22.108015327 +0000 UTC m=+987.557920108" watchObservedRunningTime="2026-01-09 11:02:22.11824583 +0000 UTC m=+987.568150611" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.146905 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" podStartSLOduration=3.693131075 podStartE2EDuration="37.146887802s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.835128342 +0000 UTC m=+953.285033123" lastFinishedPulling="2026-01-09 11:02:21.288885069 +0000 UTC m=+986.738789850" observedRunningTime="2026-01-09 11:02:22.137114452 +0000 UTC m=+987.587019253" watchObservedRunningTime="2026-01-09 11:02:22.146887802 +0000 UTC m=+987.596792583" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.177414 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" podStartSLOduration=3.151735398 podStartE2EDuration="36.177392656s" podCreationTimestamp="2026-01-09 11:01:46 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.988764002 +0000 UTC m=+953.438668783" lastFinishedPulling="2026-01-09 11:02:21.01442126 +0000 UTC m=+986.464326041" observedRunningTime="2026-01-09 11:02:22.171906384 +0000 UTC m=+987.621811155" watchObservedRunningTime="2026-01-09 11:02:22.177392656 +0000 UTC m=+987.627297437" Jan 09 11:02:22 crc kubenswrapper[4727]: I0109 11:02:22.207911 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" podStartSLOduration=34.736313981 podStartE2EDuration="37.207888728s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:02:18.54310356 +0000 UTC 
m=+983.993008351" lastFinishedPulling="2026-01-09 11:02:21.014678317 +0000 UTC m=+986.464583098" observedRunningTime="2026-01-09 11:02:22.202196451 +0000 UTC m=+987.652101232" watchObservedRunningTime="2026-01-09 11:02:22.207888728 +0000 UTC m=+987.657793509" Jan 09 11:02:23 crc kubenswrapper[4727]: I0109 11:02:23.108420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" event={"ID":"9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb","Type":"ContainerStarted","Data":"a711ad6bab6c54a737576f72d1ec1085cb6c7f771cb444a53b455050e8c716d9"} Jan 09 11:02:23 crc kubenswrapper[4727]: I0109 11:02:23.130187 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" podStartSLOduration=4.05409574 podStartE2EDuration="38.130153448s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.810107245 +0000 UTC m=+953.260012016" lastFinishedPulling="2026-01-09 11:02:21.886164933 +0000 UTC m=+987.336069724" observedRunningTime="2026-01-09 11:02:23.127592447 +0000 UTC m=+988.577497288" watchObservedRunningTime="2026-01-09 11:02:23.130153448 +0000 UTC m=+988.580058299" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.100426 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-nd7lx" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.111156 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-l25ck" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.169433 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-w5c7d" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.186066 4727 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-s49vr" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.234103 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-nxc7n" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.300901 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g5ckd" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.423247 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-4dv6h" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.440120 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-l4fld" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.520560 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q8wx7" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.529489 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.573168 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-pnk72" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.659834 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-6gtz5" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.758409 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-cc8k9" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.876578 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-jvkn5" Jan 09 11:02:26 crc kubenswrapper[4727]: I0109 11:02:26.999791 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-gkkm4" Jan 09 11:02:27 crc kubenswrapper[4727]: I0109 11:02:27.040875 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-vgcgj" Jan 09 11:02:27 crc kubenswrapper[4727]: I0109 11:02:27.060209 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-x4r9z" Jan 09 11:02:27 crc kubenswrapper[4727]: I0109 11:02:27.143422 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-m8s9d" Jan 09 11:02:28 crc kubenswrapper[4727]: I0109 11:02:28.083779 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-qpmcd" Jan 09 11:02:29 crc kubenswrapper[4727]: I0109 11:02:29.022039 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7db9fd4464-5h9ft" Jan 09 11:02:32 crc kubenswrapper[4727]: I0109 11:02:32.599048 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh" Jan 09 11:02:36 crc kubenswrapper[4727]: I0109 11:02:36.532726 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-69kx5" Jan 09 11:02:37 crc kubenswrapper[4727]: I0109 11:02:37.255604 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" event={"ID":"6040cced-684e-4521-9c4e-1debba9d5320","Type":"ContainerStarted","Data":"3f62e299c7603dd3e8592f12f4010be57384773e8b59fd1fcab1aeebc6ae6723"} Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.269252 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.286820 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" podStartSLOduration=12.611787383 podStartE2EDuration="54.286795938s" podCreationTimestamp="2026-01-09 11:01:45 +0000 UTC" firstStartedPulling="2026-01-09 11:01:47.736493618 +0000 UTC m=+953.186398399" lastFinishedPulling="2026-01-09 11:02:29.411502163 +0000 UTC m=+994.861406954" observedRunningTime="2026-01-09 11:02:39.28358796 +0000 UTC m=+1004.733492771" watchObservedRunningTime="2026-01-09 11:02:39.286795938 +0000 UTC m=+1004.736700719" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.404660 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.404726 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.404783 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.405591 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:02:39 crc kubenswrapper[4727]: I0109 11:02:39.405665 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639" gracePeriod=600 Jan 09 11:02:43 crc kubenswrapper[4727]: I0109 11:02:43.304542 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639" exitCode=0 Jan 09 11:02:43 crc kubenswrapper[4727]: I0109 11:02:43.304890 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639"} Jan 09 11:02:43 crc kubenswrapper[4727]: I0109 11:02:43.304940 4727 scope.go:117] "RemoveContainer" containerID="0b9b572f48a2b0167ef6ce08d287d773104c2b1c63269de815a8246087560cc3" Jan 09 11:02:44 crc kubenswrapper[4727]: I0109 11:02:44.314420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c"} Jan 09 11:02:46 crc kubenswrapper[4727]: I0109 11:02:46.305399 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-4nzmw" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.669248 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.671190 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.678855 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.678913 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-khtmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.678920 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.679027 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.705852 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.739539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.739652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.753697 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.755364 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.765827 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.810365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842203 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842309 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.842366 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.843348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.894651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"dnsmasq-dns-675f4bcbfc-bwls8\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.944163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.944207 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.944243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.945632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.946157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:03 crc kubenswrapper[4727]: I0109 11:03:03.966787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod 
\"dnsmasq-dns-78dd6ddcc-k9rmq\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.007117 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.075640 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.400890 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:04 crc kubenswrapper[4727]: W0109 11:03:04.404866 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4792247f_ae97_41bf_955e_9b16eea098e2.slice/crio-3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839 WatchSource:0}: Error finding container 3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839: Status 404 returned error can't find the container with id 3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839 Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.408234 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.496179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" event={"ID":"4792247f-ae97-41bf-955e-9b16eea098e2","Type":"ContainerStarted","Data":"3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839"} Jan 09 11:03:04 crc kubenswrapper[4727]: I0109 11:03:04.508827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:05 crc kubenswrapper[4727]: I0109 11:03:05.507336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" event={"ID":"998815fa-e774-44a2-ade3-1409ceee0b03","Type":"ContainerStarted","Data":"4c6ac55e742436a968c5cf0430e0e38d89af0c0bf3ea5c9361ba02a939fdb8f2"} Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.407510 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.439470 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.440951 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.454214 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.500864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.501029 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.501092 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: 
\"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.603128 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.603260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.604478 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.603298 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.608154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc 
kubenswrapper[4727]: I0109 11:03:06.647867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"dnsmasq-dns-666b6646f7-pdq66\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.733032 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.763023 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.775432 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.776992 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.859698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.911574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.912127 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:06 crc kubenswrapper[4727]: I0109 11:03:06.912184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.016644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.016773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.016854 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.018073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 
11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.019121 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.061748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"dnsmasq-dns-57d769cc4f-6r876\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.206593 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.296487 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:07 crc kubenswrapper[4727]: W0109 11:03:07.316300 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd88b93c8_236e_4b94_bd57_1e0259dd748e.slice/crio-9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240 WatchSource:0}: Error finding container 9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240: Status 404 returned error can't find the container with id 9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240 Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.540632 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerStarted","Data":"9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240"} Jan 09 11:03:07 crc kubenswrapper[4727]: 
I0109 11:03:07.586903 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.588330 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.590656 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591037 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xx2j9"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591071 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591198 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591202 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.591353 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.602616 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.614529 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.629796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.629910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.629965 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630271 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630292 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.630452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.738409 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"]
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.740887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.740934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.740974 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741030 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.741368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.743287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.744356 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.745335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.745783 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.746474 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.746792 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.755413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.757676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.759344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.763277 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: W0109 11:03:07.763309 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a8626c4_f062_47b5_b8f6_f83b93195735.slice/crio-96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f WatchSource:0}: Error finding container 96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f: Status 404 returned error can't find the container with id 96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.782463 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.804845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.918792 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.944232 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.946052 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.948754 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.949004 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.949118 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.952657 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.953019 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.953268 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j7rc6"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.954502 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 09 11:03:07 crc kubenswrapper[4727]: I0109 11:03:07.966065 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050162 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050258 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050319 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050401 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.050668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154317 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154374 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154432 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.154634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.155523 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157316 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.157726 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.163112 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.167488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.171482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.178300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.185265 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.193639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.286624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.568348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" event={"ID":"8a8626c4-f062-47b5-b8f6-f83b93195735","Type":"ContainerStarted","Data":"96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f"}
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.589993 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 11:03:08 crc kubenswrapper[4727]: I0109 11:03:08.896153 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 11:03:08 crc kubenswrapper[4727]: W0109 11:03:08.943589 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6a64ec_e743_4fa7_8e3e_5f628ebeea60.slice/crio-db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75 WatchSource:0}: Error finding container db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75: Status 404 returned error can't find the container with id db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.232847 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.234641 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.246679 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.247070 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.247344 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.249021 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zwcdt"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.251900 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.257158 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336564 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-default\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336665 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336878 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.336972 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwmk\" (UniqueName: \"kubernetes.io/projected/398bfc2d-be02-491c-af23-69fc4fc24817-kube-api-access-njwmk\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.337056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.337140 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-kolla-config\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-default\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.438980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwmk\" (UniqueName: \"kubernetes.io/projected/398bfc2d-be02-491c-af23-69fc4fc24817-kube-api-access-njwmk\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439111 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-kolla-config\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.440727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-default\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0"
Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.439701 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") device
mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.441650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-operator-scripts\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.441896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/398bfc2d-be02-491c-af23-69fc4fc24817-kolla-config\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.442242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/398bfc2d-be02-491c-af23-69fc4fc24817-config-data-generated\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.465054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwmk\" (UniqueName: \"kubernetes.io/projected/398bfc2d-be02-491c-af23-69fc4fc24817-kube-api-access-njwmk\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.471283 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.471655 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/398bfc2d-be02-491c-af23-69fc4fc24817-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.481813 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"398bfc2d-be02-491c-af23-69fc4fc24817\") " pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.574077 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.632156 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerStarted","Data":"db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75"} Jan 09 11:03:09 crc kubenswrapper[4727]: I0109 11:03:09.634942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerStarted","Data":"992da0c7f6705ab24fafadc1d428d6d6e4d619876e23e4c5406d83cc5794cf74"} Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.586423 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.588032 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.594660 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.595062 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gk6mh" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.595152 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.595348 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.630618 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.681899 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682004 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77cb5\" (UniqueName: \"kubernetes.io/projected/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kube-api-access-77cb5\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682091 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.682163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785400 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785629 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77cb5\" (UniqueName: \"kubernetes.io/projected/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kube-api-access-77cb5\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.785775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.786229 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.786345 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.787905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.787948 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.797443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.799396 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.799743 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " 
pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.811660 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77cb5\" (UniqueName: \"kubernetes.io/projected/e90a87ab-2df7-4a4a-8854-6daf3322e3d1-kube-api-access-77cb5\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.816956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e90a87ab-2df7-4a4a-8854-6daf3322e3d1\") " pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.895108 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.896842 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.906145 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.906737 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fvrvm" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.907746 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.913929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.919229 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-kolla-config\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-config-data\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.989945 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74gk\" (UniqueName: \"kubernetes.io/projected/0e6e8606-58f3-4640-939b-afa25ce1ce03-kube-api-access-m74gk\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:10 crc kubenswrapper[4727]: I0109 11:03:10.990008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 
11:03:11.091767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.091942 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.091985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-kolla-config\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.092007 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-config-data\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.092028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74gk\" (UniqueName: \"kubernetes.io/projected/0e6e8606-58f3-4640-939b-afa25ce1ce03-kube-api-access-m74gk\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.095429 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-kolla-config\") pod 
\"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.096140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6e8606-58f3-4640-939b-afa25ce1ce03-config-data\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.096416 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.114984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e6e8606-58f3-4640-939b-afa25ce1ce03-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.123069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74gk\" (UniqueName: \"kubernetes.io/projected/0e6e8606-58f3-4640-939b-afa25ce1ce03-kube-api-access-m74gk\") pod \"memcached-0\" (UID: \"0e6e8606-58f3-4640-939b-afa25ce1ce03\") " pod="openstack/memcached-0" Jan 09 11:03:11 crc kubenswrapper[4727]: I0109 11:03:11.219378 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.984391 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.988746 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.992200 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cpqd8" Jan 09 11:03:12 crc kubenswrapper[4727]: I0109 11:03:12.997093 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.025106 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"kube-state-metrics-0\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " pod="openstack/kube-state-metrics-0" Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.127536 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"kube-state-metrics-0\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " pod="openstack/kube-state-metrics-0" Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.145871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"kube-state-metrics-0\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " pod="openstack/kube-state-metrics-0" Jan 09 11:03:13 crc kubenswrapper[4727]: I0109 11:03:13.313713 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.975763 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwrp2"] Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.978027 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.981132 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tvwgr" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.981602 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.981600 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 09 11:03:15 crc kubenswrapper[4727]: I0109 11:03:15.990221 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.026462 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wxljq"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.028142 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.038771 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wxljq"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088423 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088486 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-ovn-controller-tls-certs\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088531 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdc8\" (UniqueName: \"kubernetes.io/projected/d81594ff-04f5-47c2-9620-db583609e9aa-kube-api-access-qpdc8\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088563 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-etc-ovs\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088582 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-log-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088610 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-lib\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088628 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-combined-ca-bundle\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwkr\" (UniqueName: \"kubernetes.io/projected/bdf6d307-98f2-40a7-8b6c-c149789150ef-kube-api-access-wdwkr\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d81594ff-04f5-47c2-9620-db583609e9aa-scripts\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-run\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088900 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-log\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.088915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf6d307-98f2-40a7-8b6c-c149789150ef-scripts\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d81594ff-04f5-47c2-9620-db583609e9aa-scripts\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190851 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-run\") pod \"ovn-controller-ovs-wxljq\" (UID: 
\"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-log\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.190946 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf6d307-98f2-40a7-8b6c-c149789150ef-scripts\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-ovn-controller-tls-certs\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191080 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdc8\" (UniqueName: \"kubernetes.io/projected/d81594ff-04f5-47c2-9620-db583609e9aa-kube-api-access-qpdc8\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc 
kubenswrapper[4727]: I0109 11:03:16.191115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-etc-ovs\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191140 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-log-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191184 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-lib\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191236 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-combined-ca-bundle\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwkr\" 
(UniqueName: \"kubernetes.io/projected/bdf6d307-98f2-40a7-8b6c-c149789150ef-kube-api-access-wdwkr\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-run\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-lib\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.191988 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-run-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.192018 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d81594ff-04f5-47c2-9620-db583609e9aa-var-log-ovn\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " 
pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.192156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-etc-ovs\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.192292 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bdf6d307-98f2-40a7-8b6c-c149789150ef-var-log\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.193428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d81594ff-04f5-47c2-9620-db583609e9aa-scripts\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.194735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdf6d307-98f2-40a7-8b6c-c149789150ef-scripts\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.200110 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-ovn-controller-tls-certs\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.206011 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81594ff-04f5-47c2-9620-db583609e9aa-combined-ca-bundle\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.211326 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdc8\" (UniqueName: \"kubernetes.io/projected/d81594ff-04f5-47c2-9620-db583609e9aa-kube-api-access-qpdc8\") pod \"ovn-controller-mwrp2\" (UID: \"d81594ff-04f5-47c2-9620-db583609e9aa\") " pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.225437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwkr\" (UniqueName: \"kubernetes.io/projected/bdf6d307-98f2-40a7-8b6c-c149789150ef-kube-api-access-wdwkr\") pod \"ovn-controller-ovs-wxljq\" (UID: \"bdf6d307-98f2-40a7-8b6c-c149789150ef\") " pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.296903 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.345799 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.930916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.933624 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.936491 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.936735 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.936894 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pqthl" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.937046 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.937172 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 09 11:03:16 crc kubenswrapper[4727]: I0109 11:03:16.952115 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114196 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114270 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114320 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8fk\" (UniqueName: \"kubernetes.io/projected/2e25e0da-05c1-4d2e-8e27-c795be192a77-kube-api-access-rt8fk\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114392 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.114411 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8fk\" (UniqueName: \"kubernetes.io/projected/2e25e0da-05c1-4d2e-8e27-c795be192a77-kube-api-access-rt8fk\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216557 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216605 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " 
pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.216724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.217592 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.218369 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.219386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.219824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e25e0da-05c1-4d2e-8e27-c795be192a77-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.223024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.224169 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.224468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e25e0da-05c1-4d2e-8e27-c795be192a77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.239876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8fk\" (UniqueName: \"kubernetes.io/projected/2e25e0da-05c1-4d2e-8e27-c795be192a77-kube-api-access-rt8fk\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " 
pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.246888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e25e0da-05c1-4d2e-8e27-c795be192a77\") " pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:17 crc kubenswrapper[4727]: I0109 11:03:17.258556 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.412871 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.415224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.418530 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.418587 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.418902 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.419080 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9m8qm" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.434199 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582406 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582483 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582535 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582645 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzkh\" (UniqueName: \"kubernetes.io/projected/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-kube-api-access-hhzkh\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 
crc kubenswrapper[4727]: I0109 11:03:20.582700 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.582740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.684890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.684988 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685134 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685184 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685249 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhzkh\" (UniqueName: \"kubernetes.io/projected/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-kube-api-access-hhzkh\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.685498 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.686107 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.686527 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.686683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.702201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.710204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.713161 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.720979 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhzkh\" (UniqueName: \"kubernetes.io/projected/4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8-kube-api-access-hhzkh\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.737825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8\") " pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:20 crc kubenswrapper[4727]: I0109 11:03:20.755657 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.355889 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.356977 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvvnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bwls8_openstack(998815fa-e774-44a2-ade3-1409ceee0b03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.358206 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" podUID="998815fa-e774-44a2-ade3-1409ceee0b03" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.409582 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.409844 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k9rmq_openstack(4792247f-ae97-41bf-955e-9b16eea098e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.411036 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" podUID="4792247f-ae97-41bf-955e-9b16eea098e2" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.426045 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.426197 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfc9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-pdq66_openstack(d88b93c8-236e-4b94-bd57-1e0259dd748e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.427490 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.446912 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.447072 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc4qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-6r876_openstack(8a8626c4-f062-47b5-b8f6-f83b93195735): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.448853 4727 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" podUID="8a8626c4-f062-47b5-b8f6-f83b93195735" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.818620 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" podUID="8a8626c4-f062-47b5-b8f6-f83b93195735" Jan 09 11:03:29 crc kubenswrapper[4727]: E0109 11:03:29.818682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" Jan 09 11:03:29 crc kubenswrapper[4727]: I0109 11:03:29.843229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 11:03:29 crc kubenswrapper[4727]: W0109 11:03:29.995655 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398bfc2d_be02_491c_af23_69fc4fc24817.slice/crio-771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7 WatchSource:0}: Error finding container 771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7: Status 404 returned error can't find the container with id 771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7 Jan 09 11:03:29 crc kubenswrapper[4727]: I0109 11:03:29.999104 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: W0109 11:03:30.001348 4727 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26965ac2_3dab_452c_8a34_83eadab4b929.slice/crio-049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc WatchSource:0}: Error finding container 049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc: Status 404 returned error can't find the container with id 049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.013388 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.020310 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.374727 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.472965 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 11:03:30 crc kubenswrapper[4727]: W0109 11:03:30.644801 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd81594ff_04f5_47c2_9620_db583609e9aa.slice/crio-d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6 WatchSource:0}: Error finding container d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6: Status 404 returned error can't find the container with id d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6 Jan 09 11:03:30 crc kubenswrapper[4727]: W0109 11:03:30.649760 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e25e0da_05c1_4d2e_8e27_c795be192a77.slice/crio-569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818 WatchSource:0}: Error finding container 
569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818: Status 404 returned error can't find the container with id 569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818 Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.710757 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.717850 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.787555 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") pod \"998815fa-e774-44a2-ade3-1409ceee0b03\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.787712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") pod \"998815fa-e774-44a2-ade3-1409ceee0b03\" (UID: \"998815fa-e774-44a2-ade3-1409ceee0b03\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.788452 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config" (OuterVolumeSpecName: "config") pod "998815fa-e774-44a2-ade3-1409ceee0b03" (UID: "998815fa-e774-44a2-ade3-1409ceee0b03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.795418 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh" (OuterVolumeSpecName: "kube-api-access-kvvnh") pod "998815fa-e774-44a2-ade3-1409ceee0b03" (UID: "998815fa-e774-44a2-ade3-1409ceee0b03"). InnerVolumeSpecName "kube-api-access-kvvnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.826950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerStarted","Data":"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.829784 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerStarted","Data":"049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.832599 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e25e0da-05c1-4d2e-8e27-c795be192a77","Type":"ContainerStarted","Data":"569f504ea787fe3f3efbff64a110abf76420f6be5c330d57170345ac31438818"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.835192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" event={"ID":"998815fa-e774-44a2-ade3-1409ceee0b03","Type":"ContainerDied","Data":"4c6ac55e742436a968c5cf0430e0e38d89af0c0bf3ea5c9361ba02a939fdb8f2"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.835332 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwls8" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.836408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0e6e8606-58f3-4640-939b-afa25ce1ce03","Type":"ContainerStarted","Data":"c28a596d903243891199eda04e647131ee1feb3b54a523f604ee927a4279baab"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.838649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerStarted","Data":"7a7eb66b883e9a0a66bca284dd6fdfd311fc36e281015acef7c1c8b13abe892e"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.841001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerStarted","Data":"4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.843798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerStarted","Data":"771c3204d72297021d04bc3b2cbb8b5659d99bbada111036945f060a67db31b7"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.845975 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" event={"ID":"4792247f-ae97-41bf-955e-9b16eea098e2","Type":"ContainerDied","Data":"3185333f5d6616a5bc50c8ef2e4334af302a94ec1d0026567cac26e93cc2a839"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.846025 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9rmq" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.847666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2" event={"ID":"d81594ff-04f5-47c2-9620-db583609e9aa","Type":"ContainerStarted","Data":"d5458e9fc2cc2a040bf095c49582af03589262f8f2aff543aa4ce82137842fc6"} Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.889468 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") pod \"4792247f-ae97-41bf-955e-9b16eea098e2\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.889630 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") pod \"4792247f-ae97-41bf-955e-9b16eea098e2\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.889826 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") pod \"4792247f-ae97-41bf-955e-9b16eea098e2\" (UID: \"4792247f-ae97-41bf-955e-9b16eea098e2\") " Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.890311 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvvnh\" (UniqueName: \"kubernetes.io/projected/998815fa-e774-44a2-ade3-1409ceee0b03-kube-api-access-kvvnh\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.890339 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998815fa-e774-44a2-ade3-1409ceee0b03-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: 
I0109 11:03:30.890755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config" (OuterVolumeSpecName: "config") pod "4792247f-ae97-41bf-955e-9b16eea098e2" (UID: "4792247f-ae97-41bf-955e-9b16eea098e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.890843 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4792247f-ae97-41bf-955e-9b16eea098e2" (UID: "4792247f-ae97-41bf-955e-9b16eea098e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.898563 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns" (OuterVolumeSpecName: "kube-api-access-dvgns") pod "4792247f-ae97-41bf-955e-9b16eea098e2" (UID: "4792247f-ae97-41bf-955e-9b16eea098e2"). InnerVolumeSpecName "kube-api-access-dvgns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.969615 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.980972 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwls8"] Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.998409 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.998464 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4792247f-ae97-41bf-955e-9b16eea098e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:30 crc kubenswrapper[4727]: I0109 11:03:30.998479 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgns\" (UniqueName: \"kubernetes.io/projected/4792247f-ae97-41bf-955e-9b16eea098e2-kube-api-access-dvgns\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.074211 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.215286 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.221178 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9rmq"] Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.502585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wxljq"] Jan 09 11:03:31 crc kubenswrapper[4727]: W0109 11:03:31.737563 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a92393f_3fc8_4570_9e2f_b3aed9ce9bb8.slice/crio-8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee WatchSource:0}: Error finding container 8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee: Status 404 returned error can't find the container with id 8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.858424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"e91831c33a7ef81519243790ea5b18c65641460da17a60921162046cdb477acb"} Jan 09 11:03:31 crc kubenswrapper[4727]: I0109 11:03:31.860376 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8","Type":"ContainerStarted","Data":"8b3ae7648e8750bad9012edbc5fd93a394181871ff982145bc8c736807e434ee"} Jan 09 11:03:32 crc kubenswrapper[4727]: I0109 11:03:32.870055 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4792247f-ae97-41bf-955e-9b16eea098e2" path="/var/lib/kubelet/pods/4792247f-ae97-41bf-955e-9b16eea098e2/volumes" Jan 09 11:03:32 crc kubenswrapper[4727]: I0109 11:03:32.870746 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998815fa-e774-44a2-ade3-1409ceee0b03" path="/var/lib/kubelet/pods/998815fa-e774-44a2-ade3-1409ceee0b03/volumes" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.931616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e25e0da-05c1-4d2e-8e27-c795be192a77","Type":"ContainerStarted","Data":"e1769745eee41b35a446a104041934be22bb24b754f2896fc7c445fd568054e2"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.934848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2" 
event={"ID":"d81594ff-04f5-47c2-9620-db583609e9aa","Type":"ContainerStarted","Data":"004ec23cffea5ee515e7291ccd33b721de72dc39a2d9da6a1931ce3e71ff33db"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.935075 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mwrp2" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.936742 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"e1e3de1959adc113296b88070d1b82314efcd2cf2979f4f0a11107c4e80f0470"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.941186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0e6e8606-58f3-4640-939b-afa25ce1ce03","Type":"ContainerStarted","Data":"77fd03ab99813bf8ec1e830cd1b50448330e8ea8c1acdb09a9a2bb373218ca07"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.941352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.942999 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerStarted","Data":"aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.943477 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.945557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8","Type":"ContainerStarted","Data":"dccee653b0e4ca3fc20dbc10644eb1a9b2f8f30642a17240aab9cec37d536871"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.947472 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerStarted","Data":"749206d3d963065c3cfd37c4274e1462377134e24d83298853087549af255b6b"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.949194 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerStarted","Data":"d71244c67d6c440004c9ba9762fdf69354f72c0b58f032567a7adfe6f9733a0c"} Jan 09 11:03:37 crc kubenswrapper[4727]: I0109 11:03:37.966729 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwrp2" podStartSLOduration=16.245137619 podStartE2EDuration="22.966702706s" podCreationTimestamp="2026-01-09 11:03:15 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.651218504 +0000 UTC m=+1056.101123285" lastFinishedPulling="2026-01-09 11:03:37.372783591 +0000 UTC m=+1062.822688372" observedRunningTime="2026-01-09 11:03:37.95816868 +0000 UTC m=+1063.408073461" watchObservedRunningTime="2026-01-09 11:03:37.966702706 +0000 UTC m=+1063.416607497" Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.072204 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.608249926 podStartE2EDuration="26.072101238s" podCreationTimestamp="2026-01-09 11:03:12 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.003114496 +0000 UTC m=+1055.453019277" lastFinishedPulling="2026-01-09 11:03:37.466965808 +0000 UTC m=+1062.916870589" observedRunningTime="2026-01-09 11:03:38.06822699 +0000 UTC m=+1063.518131781" watchObservedRunningTime="2026-01-09 11:03:38.072101238 +0000 UTC m=+1063.522006019" Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.090849 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.151904875 podStartE2EDuration="28.090817422s" 
podCreationTimestamp="2026-01-09 11:03:10 +0000 UTC" firstStartedPulling="2026-01-09 11:03:29.832769178 +0000 UTC m=+1055.282673959" lastFinishedPulling="2026-01-09 11:03:31.771681735 +0000 UTC m=+1057.221586506" observedRunningTime="2026-01-09 11:03:38.090417392 +0000 UTC m=+1063.540322203" watchObservedRunningTime="2026-01-09 11:03:38.090817422 +0000 UTC m=+1063.540722203" Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.966067 4727 generic.go:334] "Generic (PLEG): container finished" podID="bdf6d307-98f2-40a7-8b6c-c149789150ef" containerID="e1e3de1959adc113296b88070d1b82314efcd2cf2979f4f0a11107c4e80f0470" exitCode=0 Jan 09 11:03:38 crc kubenswrapper[4727]: I0109 11:03:38.966283 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerDied","Data":"e1e3de1959adc113296b88070d1b82314efcd2cf2979f4f0a11107c4e80f0470"} Jan 09 11:03:39 crc kubenswrapper[4727]: I0109 11:03:39.977566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"9ef24e3a77bb83a46b565e29bfc907ae65d435ed7a5de1f688ae8c9dcb457a5c"} Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.987447 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e25e0da-05c1-4d2e-8e27-c795be192a77","Type":"ContainerStarted","Data":"90608d469bcea6c32f9b76c5aa0b01b635a995a6de7e929fed500e416e8d8fe6"} Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.992668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wxljq" event={"ID":"bdf6d307-98f2-40a7-8b6c-c149789150ef","Type":"ContainerStarted","Data":"5de25bef9e2800edd8fe3384498eb106a5fa2fff29330d377e15ed57c1998c58"} Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.993372 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.993416 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:03:40 crc kubenswrapper[4727]: I0109 11:03:40.999339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8","Type":"ContainerStarted","Data":"258e27329ff44bb1e17ff8596d3a60b380eaad82950f1b0fbe95791c83c6ef15"} Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.044784 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.22824275 podStartE2EDuration="26.044765519s" podCreationTimestamp="2026-01-09 11:03:15 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.652335781 +0000 UTC m=+1056.102240562" lastFinishedPulling="2026-01-09 11:03:40.46885855 +0000 UTC m=+1065.918763331" observedRunningTime="2026-01-09 11:03:41.042327067 +0000 UTC m=+1066.492231848" watchObservedRunningTime="2026-01-09 11:03:41.044765519 +0000 UTC m=+1066.494670300" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.126999 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wxljq" podStartSLOduration=21.103752193 podStartE2EDuration="26.126970292s" podCreationTimestamp="2026-01-09 11:03:15 +0000 UTC" firstStartedPulling="2026-01-09 11:03:31.736839472 +0000 UTC m=+1057.186744253" lastFinishedPulling="2026-01-09 11:03:36.760057561 +0000 UTC m=+1062.209962352" observedRunningTime="2026-01-09 11:03:41.108410092 +0000 UTC m=+1066.558314873" watchObservedRunningTime="2026-01-09 11:03:41.126970292 +0000 UTC m=+1066.576875073" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.159415 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.437324448 podStartE2EDuration="22.159398215s" 
podCreationTimestamp="2026-01-09 11:03:19 +0000 UTC" firstStartedPulling="2026-01-09 11:03:31.765338724 +0000 UTC m=+1057.215243505" lastFinishedPulling="2026-01-09 11:03:40.487412501 +0000 UTC m=+1065.937317272" observedRunningTime="2026-01-09 11:03:41.1371051 +0000 UTC m=+1066.587009891" watchObservedRunningTime="2026-01-09 11:03:41.159398215 +0000 UTC m=+1066.609302996" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.258993 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.298942 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.757341 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:41 crc kubenswrapper[4727]: I0109 11:03:41.851055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.008895 4727 generic.go:334] "Generic (PLEG): container finished" podID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" exitCode=0 Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.009067 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerDied","Data":"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc"} Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.013974 4727 generic.go:334] "Generic (PLEG): container finished" podID="e90a87ab-2df7-4a4a-8854-6daf3322e3d1" containerID="749206d3d963065c3cfd37c4274e1462377134e24d83298853087549af255b6b" exitCode=0 Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.014201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerDied","Data":"749206d3d963065c3cfd37c4274e1462377134e24d83298853087549af255b6b"} Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.017863 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerDied","Data":"d71244c67d6c440004c9ba9762fdf69354f72c0b58f032567a7adfe6f9733a0c"} Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.024660 4727 generic.go:334] "Generic (PLEG): container finished" podID="398bfc2d-be02-491c-af23-69fc4fc24817" containerID="d71244c67d6c440004c9ba9762fdf69354f72c0b58f032567a7adfe6f9733a0c" exitCode=0 Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.026728 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.028245 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.084634 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.096071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.375906 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.442997 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.445274 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.448097 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.461101 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p58fw"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.462536 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.469895 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.474095 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p58fw"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.490721 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.531800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532182 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-combined-ca-bundle\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532426 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede60be2-7d1e-482a-b994-6c552d322575-config\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532545 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpxj\" (UniqueName: \"kubernetes.io/projected/ede60be2-7d1e-482a-b994-6c552d322575-kube-api-access-nvpxj\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 
11:03:42.532785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovs-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.532910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovn-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.533116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.628741 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635875 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635921 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: 
\"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.635990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-combined-ca-bundle\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636019 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede60be2-7d1e-482a-b994-6c552d322575-config\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpxj\" (UniqueName: \"kubernetes.io/projected/ede60be2-7d1e-482a-b994-6c552d322575-kube-api-access-nvpxj\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636114 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovs-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovn-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.636158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.637269 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.638951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovs-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.639030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ede60be2-7d1e-482a-b994-6c552d322575-ovn-rundir\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.639673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.639963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.640228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.640250 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede60be2-7d1e-482a-b994-6c552d322575-config\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.641454 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.642382 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.642911 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.644009 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.644321 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.644447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-x2fhd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.645968 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede60be2-7d1e-482a-b994-6c552d322575-combined-ca-bundle\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.679261 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.682291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpxj\" (UniqueName: \"kubernetes.io/projected/ede60be2-7d1e-482a-b994-6c552d322575-kube-api-access-nvpxj\") pod \"ovn-controller-metrics-p58fw\" (UID: \"ede60be2-7d1e-482a-b994-6c552d322575\") " pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.700871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"dnsmasq-dns-7fd796d7df-s8759\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.715744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.717132 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.720158 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.739515 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740964 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.740991 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8n4\" (UniqueName: \"kubernetes.io/projected/5504697e-8969-45f2-92c6-3aba8688de1a-kube-api-access-nb8n4\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.741017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-scripts\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.741065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-config\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.786615 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.791083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-p58fw" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-scripts\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-config\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845364 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845540 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.845640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.846386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-config\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865684 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5504697e-8969-45f2-92c6-3aba8688de1a-scripts\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.865674 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb8n4\" (UniqueName: \"kubernetes.io/projected/5504697e-8969-45f2-92c6-3aba8688de1a-kube-api-access-nb8n4\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.867954 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.872033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 
11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.872735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.877545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5504697e-8969-45f2-92c6-3aba8688de1a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.895532 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb8n4\" (UniqueName: \"kubernetes.io/projected/5504697e-8969-45f2-92c6-3aba8688de1a-kube-api-access-nb8n4\") pod \"ovn-northd-0\" (UID: \"5504697e-8969-45f2-92c6-3aba8688de1a\") " pod="openstack/ovn-northd-0" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.921207 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.988387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990806 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" 
Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.990964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.991854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.992114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:42 crc kubenswrapper[4727]: I0109 11:03:42.993392 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.031821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"dnsmasq-dns-86db49b7ff-shfxd\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") " pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.040850 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" event={"ID":"8a8626c4-f062-47b5-b8f6-f83b93195735","Type":"ContainerDied","Data":"96aad1c34dcf5db9e6cfedaf9e31ee9607c404bc8232c3631e07700ef00cf48f"} Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.040934 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6r876" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.043037 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e90a87ab-2df7-4a4a-8854-6daf3322e3d1","Type":"ContainerStarted","Data":"3e935267903d5c3555fc3eab5aa6d0d5b08d129dace48ea060558aed0d5213c7"} Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.088930 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.091639 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") pod \"8a8626c4-f062-47b5-b8f6-f83b93195735\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.092010 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") pod \"8a8626c4-f062-47b5-b8f6-f83b93195735\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.092322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") pod \"8a8626c4-f062-47b5-b8f6-f83b93195735\" (UID: \"8a8626c4-f062-47b5-b8f6-f83b93195735\") " Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.097533 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.260646132 podStartE2EDuration="34.097502412s" podCreationTimestamp="2026-01-09 11:03:09 +0000 UTC" firstStartedPulling="2026-01-09 11:03:29.999306809 +0000 UTC m=+1055.449211590" lastFinishedPulling="2026-01-09 11:03:36.836163089 +0000 UTC m=+1062.286067870" observedRunningTime="2026-01-09 11:03:43.074988141 +0000 UTC m=+1068.524892932" watchObservedRunningTime="2026-01-09 11:03:43.097502412 +0000 UTC m=+1068.547407193" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.099265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a8626c4-f062-47b5-b8f6-f83b93195735" (UID: "8a8626c4-f062-47b5-b8f6-f83b93195735"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.099406 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq" (OuterVolumeSpecName: "kube-api-access-wc4qq") pod "8a8626c4-f062-47b5-b8f6-f83b93195735" (UID: "8a8626c4-f062-47b5-b8f6-f83b93195735"). InnerVolumeSpecName "kube-api-access-wc4qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.102964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config" (OuterVolumeSpecName: "config") pod "8a8626c4-f062-47b5-b8f6-f83b93195735" (UID: "8a8626c4-f062-47b5-b8f6-f83b93195735"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.108277 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.195604 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc4qq\" (UniqueName: \"kubernetes.io/projected/8a8626c4-f062-47b5-b8f6-f83b93195735-kube-api-access-wc4qq\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.195645 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.195657 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a8626c4-f062-47b5-b8f6-f83b93195735-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.323605 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.418074 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p58fw"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.452255 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.469164 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.483474 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6r876"] Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.574048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 09 11:03:43 crc kubenswrapper[4727]: W0109 11:03:43.581556 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5504697e_8969_45f2_92c6_3aba8688de1a.slice/crio-beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e WatchSource:0}: Error finding container beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e: Status 404 returned error can't find the container with id beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e Jan 09 11:03:43 crc kubenswrapper[4727]: I0109 11:03:43.650845 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.068051 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"398bfc2d-be02-491c-af23-69fc4fc24817","Type":"ContainerStarted","Data":"519f8f5d5e7190352f37ebd7a547601e4ce345d0b63e6063379a577b0ca68c2c"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.070290 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86" exitCode=0 Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.070450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerDied","Data":"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.070575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerStarted","Data":"90625e00836e35ec42870d8838b1bad64246fb7214b4d03011fe48a0e3903723"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.075172 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" 
event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerStarted","Data":"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.075282 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.075294 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns" containerID="cri-o://1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" gracePeriod=10 Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.079455 4727 generic.go:334] "Generic (PLEG): container finished" podID="9af0367c-139f-443d-9b2b-54908e88f39c" containerID="87def1a1e5b96c750eada21838e69b6f07dfd2503065dbb58dd428a9c0764731" exitCode=0 Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.079816 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerDied","Data":"87def1a1e5b96c750eada21838e69b6f07dfd2503065dbb58dd428a9c0764731"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.079919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerStarted","Data":"64f12a2e916ba0978736fe5fcb0ce8bed71a92aea02c9a9e4d93c6d88a07c4ec"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.084336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5504697e-8969-45f2-92c6-3aba8688de1a","Type":"ContainerStarted","Data":"beb531e10c3d86d0bb48c5d4f67a7574a92e26cfcf76b5944e7e935e6cb4172e"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.102711 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-p58fw" event={"ID":"ede60be2-7d1e-482a-b994-6c552d322575","Type":"ContainerStarted","Data":"fd04a5259a42e7ccf2db63769c37b680ef294f6d19c1a3d3a3d60d891336297b"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.102770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p58fw" event={"ID":"ede60be2-7d1e-482a-b994-6c552d322575","Type":"ContainerStarted","Data":"8254e732664e97894563db5de08c5e1bf27ade4792397799ae16e934251edc03"} Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.120777 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.278665797 podStartE2EDuration="36.120754369s" podCreationTimestamp="2026-01-09 11:03:08 +0000 UTC" firstStartedPulling="2026-01-09 11:03:30.000227923 +0000 UTC m=+1055.450132714" lastFinishedPulling="2026-01-09 11:03:36.842316505 +0000 UTC m=+1062.292221286" observedRunningTime="2026-01-09 11:03:44.096591356 +0000 UTC m=+1069.546496137" watchObservedRunningTime="2026-01-09 11:03:44.120754369 +0000 UTC m=+1069.570659150" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.154359 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" podStartSLOduration=4.085978235 podStartE2EDuration="38.154296029s" podCreationTimestamp="2026-01-09 11:03:06 +0000 UTC" firstStartedPulling="2026-01-09 11:03:07.322210139 +0000 UTC m=+1032.772114920" lastFinishedPulling="2026-01-09 11:03:41.390527933 +0000 UTC m=+1066.840432714" observedRunningTime="2026-01-09 11:03:44.143128826 +0000 UTC m=+1069.593033607" watchObservedRunningTime="2026-01-09 11:03:44.154296029 +0000 UTC m=+1069.604200820" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.190122 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-p58fw" podStartSLOduration=2.190101336 
podStartE2EDuration="2.190101336s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:44.188365043 +0000 UTC m=+1069.638269834" watchObservedRunningTime="2026-01-09 11:03:44.190101336 +0000 UTC m=+1069.640006137" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.632962 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.646925 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") pod \"d88b93c8-236e-4b94-bd57-1e0259dd748e\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.652540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") pod \"d88b93c8-236e-4b94-bd57-1e0259dd748e\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.652623 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") pod \"d88b93c8-236e-4b94-bd57-1e0259dd748e\" (UID: \"d88b93c8-236e-4b94-bd57-1e0259dd748e\") " Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.664274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p" (OuterVolumeSpecName: "kube-api-access-tfc9p") pod "d88b93c8-236e-4b94-bd57-1e0259dd748e" (UID: "d88b93c8-236e-4b94-bd57-1e0259dd748e"). InnerVolumeSpecName "kube-api-access-tfc9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.703670 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d88b93c8-236e-4b94-bd57-1e0259dd748e" (UID: "d88b93c8-236e-4b94-bd57-1e0259dd748e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.733890 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config" (OuterVolumeSpecName: "config") pod "d88b93c8-236e-4b94-bd57-1e0259dd748e" (UID: "d88b93c8-236e-4b94-bd57-1e0259dd748e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.755944 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.755978 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfc9p\" (UniqueName: \"kubernetes.io/projected/d88b93c8-236e-4b94-bd57-1e0259dd748e-kube-api-access-tfc9p\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.756109 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88b93c8-236e-4b94-bd57-1e0259dd748e-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:03:44 crc kubenswrapper[4727]: I0109 11:03:44.871237 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8626c4-f062-47b5-b8f6-f83b93195735" path="/var/lib/kubelet/pods/8a8626c4-f062-47b5-b8f6-f83b93195735/volumes" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.113116 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5504697e-8969-45f2-92c6-3aba8688de1a","Type":"ContainerStarted","Data":"ab66be79242c993625da55e2412401fde94d26b34f5a3f862a677921b506bf5f"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.113475 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5504697e-8969-45f2-92c6-3aba8688de1a","Type":"ContainerStarted","Data":"e80ab5a9c392c933470e5688d54439ed9d23fc14b01ea81f8dfd12319f0d8058"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.113544 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.116224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerStarted","Data":"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.116472 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118807 4727 generic.go:334] "Generic (PLEG): container finished" podID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" exitCode=0 Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118862 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerDied","Data":"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118890 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" 
event={"ID":"d88b93c8-236e-4b94-bd57-1e0259dd748e","Type":"ContainerDied","Data":"9642df6ccb2e02a23fe8e2b3c3100f4f75a22186bc65d70d2555faecfb1f1240"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.118910 4727 scope.go:117] "RemoveContainer" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.119049 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-pdq66" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.122495 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerStarted","Data":"c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3"} Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.137789 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.038005541 podStartE2EDuration="3.137767238s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="2026-01-09 11:03:43.583928201 +0000 UTC m=+1069.033832982" lastFinishedPulling="2026-01-09 11:03:44.683689898 +0000 UTC m=+1070.133594679" observedRunningTime="2026-01-09 11:03:45.134456935 +0000 UTC m=+1070.584361716" watchObservedRunningTime="2026-01-09 11:03:45.137767238 +0000 UTC m=+1070.587672019" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.147004 4727 scope.go:117] "RemoveContainer" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.168779 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" podStartSLOduration=3.168759484 podStartE2EDuration="3.168759484s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:45.159102379 +0000 UTC m=+1070.609007200" watchObservedRunningTime="2026-01-09 11:03:45.168759484 +0000 UTC m=+1070.618664265" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.178623 4727 scope.go:117] "RemoveContainer" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" Jan 09 11:03:45 crc kubenswrapper[4727]: E0109 11:03:45.179117 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5\": container with ID starting with 1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5 not found: ID does not exist" containerID="1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.179187 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5"} err="failed to get container status \"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5\": rpc error: code = NotFound desc = could not find container \"1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5\": container with ID starting with 1fa7673b8caf258b11402a8cf8d3f4db2205d26a69403c8f44fce8b47578f0e5 not found: ID does not exist" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.179220 4727 scope.go:117] "RemoveContainer" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" Jan 09 11:03:45 crc kubenswrapper[4727]: E0109 11:03:45.179616 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc\": container with ID starting with 58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc not found: 
ID does not exist" containerID="58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.179689 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc"} err="failed to get container status \"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc\": rpc error: code = NotFound desc = could not find container \"58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc\": container with ID starting with 58348074078b935618d96dfa3cba4b6096f46dec0c7b19992a461deb03f500cc not found: ID does not exist" Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.190873 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.203065 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-pdq66"] Jan 09 11:03:45 crc kubenswrapper[4727]: I0109 11:03:45.205371 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" podStartSLOduration=3.205347271 podStartE2EDuration="3.205347271s" podCreationTimestamp="2026-01-09 11:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:45.200000565 +0000 UTC m=+1070.649905346" watchObservedRunningTime="2026-01-09 11:03:45.205347271 +0000 UTC m=+1070.655252052" Jan 09 11:03:46 crc kubenswrapper[4727]: I0109 11:03:46.133282 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:46 crc kubenswrapper[4727]: I0109 11:03:46.221717 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 09 11:03:46 crc kubenswrapper[4727]: I0109 11:03:46.875195 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" path="/var/lib/kubelet/pods/d88b93c8-236e-4b94-bd57-1e0259dd748e/volumes" Jan 09 11:03:49 crc kubenswrapper[4727]: I0109 11:03:49.574393 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 09 11:03:49 crc kubenswrapper[4727]: I0109 11:03:49.574845 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 09 11:03:50 crc kubenswrapper[4727]: I0109 11:03:50.920379 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:50 crc kubenswrapper[4727]: I0109 11:03:50.920487 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 09 11:03:52 crc kubenswrapper[4727]: I0109 11:03:52.790120 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.014565 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"] Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.014842 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" containerID="cri-o://c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3" gracePeriod=10 Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.015730 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.070550 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:03:53 crc kubenswrapper[4727]: E0109 11:03:53.072622 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.072655 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns"
Jan 09 11:03:53 crc kubenswrapper[4727]: E0109 11:03:53.072702 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="init"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.072710 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="init"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.072982 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88b93c8-236e-4b94-bd57-1e0259dd748e" containerName="dnsmasq-dns"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.074248 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.089460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"]
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.109563 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227349 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227687 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.227797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.329634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330750 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.330891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.352351 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"dnsmasq-dns-698758b865-rj6lv\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.393962 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:53 crc kubenswrapper[4727]: I0109 11:03:53.912316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"]
Jan 09 11:03:53 crc kubenswrapper[4727]: W0109 11:03:53.915661 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72decd78_911c_43ff_9f4e_0d99d71cf84b.slice/crio-ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db WatchSource:0}: Error finding container ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db: Status 404 returned error can't find the container with id ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.112357 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.120760 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.125906 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.126688 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ql8vj"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.127450 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.127804 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.163227 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.210022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerStarted","Data":"ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db"}
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.247658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.247824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-cache\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.247909 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fb5d\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-kube-api-access-9fb5d\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.248155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-lock\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.248246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.350684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.350928 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.351047 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-cache\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.351121 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.351236 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:03:54.85119943 +0000 UTC m=+1080.301104251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fb5d\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-kube-api-access-9fb5d\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351461 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351795 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-lock\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.351990 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-cache\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.352352 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-lock\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.400608 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fb5d\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-kube-api-access-9fb5d\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.412682 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.451551 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-t2qwp"]
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.452756 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.455499 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.455617 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.463353 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t2qwp"]
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.467862 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555594 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555743 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555856 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.555966 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.556136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.556277 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657597 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.657790 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.658281 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.658483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.658528 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.662053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.662284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.662729 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.676681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"swift-ring-rebalance-t2qwp\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.799619 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp"
Jan 09 11:03:54 crc kubenswrapper[4727]: I0109 11:03:54.860980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.861361 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.861400 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 09 11:03:54 crc kubenswrapper[4727]: E0109 11:03:54.861475 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:03:55.861447434 +0000 UTC m=+1081.311352215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.221208 4727 generic.go:334] "Generic (PLEG): container finished" podID="9af0367c-139f-443d-9b2b-54908e88f39c" containerID="c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3" exitCode=0
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.221301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerDied","Data":"c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3"}
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.223411 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerStarted","Data":"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1"}
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.280673 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.312423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t2qwp"]
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.403114 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="398bfc2d-be02-491c-af23-69fc4fc24817" containerName="galera" probeResult="failure" output=<
Jan 09 11:03:55 crc kubenswrapper[4727]: wsrep_local_state_comment (Joined) differs from Synced
Jan 09 11:03:55 crc kubenswrapper[4727]: >
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.527321 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.623870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.626647 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd"
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680306 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") "
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680468 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") "
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") "
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") "
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.680703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") pod \"9af0367c-139f-443d-9b2b-54908e88f39c\" (UID: \"9af0367c-139f-443d-9b2b-54908e88f39c\") "
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.689824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v" (OuterVolumeSpecName: "kube-api-access-jql8v") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "kube-api-access-jql8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.743046 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.744966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.752592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config" (OuterVolumeSpecName: "config") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.769066 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9af0367c-139f-443d-9b2b-54908e88f39c" (UID: "9af0367c-139f-443d-9b2b-54908e88f39c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785521 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785560 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785571 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785584 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jql8v\" (UniqueName: \"kubernetes.io/projected/9af0367c-139f-443d-9b2b-54908e88f39c-kube-api-access-jql8v\") on node \"crc\" DevicePath \"\""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.785596 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af0367c-139f-443d-9b2b-54908e88f39c-config\") on node \"crc\" DevicePath \"\""
Jan 09 11:03:55 crc kubenswrapper[4727]: I0109 11:03:55.887341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:55 crc kubenswrapper[4727]: E0109 11:03:55.887636 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 09 11:03:55 crc kubenswrapper[4727]: E0109 11:03:55.887680 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 09 11:03:55 crc kubenswrapper[4727]: E0109 11:03:55.887789 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:03:57.887755449 +0000 UTC m=+1083.337660240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.235052 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerStarted","Data":"7defc95c6498d89e6da8f7e9594f0703896df6675e2dac5d432b4b32dce7536c"}
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.236989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd" event={"ID":"9af0367c-139f-443d-9b2b-54908e88f39c","Type":"ContainerDied","Data":"64f12a2e916ba0978736fe5fcb0ce8bed71a92aea02c9a9e4d93c6d88a07c4ec"}
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.237028 4727 scope.go:117] "RemoveContainer" containerID="c7095eda1d9a83ea05c0e919f72c9c7f440662b448029091e10868df44ba17e3"
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.237102 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-shfxd"
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.238847 4727 generic.go:334] "Generic (PLEG): container finished" podID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" exitCode=0
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.240338 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerDied","Data":"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1"}
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.352382 4727 scope.go:117] "RemoveContainer" containerID="87def1a1e5b96c750eada21838e69b6f07dfd2503065dbb58dd428a9c0764731"
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.372484 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"]
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.399605 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-shfxd"]
Jan 09 11:03:56 crc kubenswrapper[4727]: I0109 11:03:56.873458 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" path="/var/lib/kubelet/pods/9af0367c-139f-443d-9b2b-54908e88f39c/volumes"
Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.251031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerStarted","Data":"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd"}
Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.251802 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-rj6lv"
Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.279304 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-rj6lv" podStartSLOduration=4.279285221 podStartE2EDuration="4.279285221s" podCreationTimestamp="2026-01-09 11:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:03:57.273472884 +0000 UTC m=+1082.723377665" watchObservedRunningTime="2026-01-09 11:03:57.279285221 +0000 UTC m=+1082.729189992"
Jan 09 11:03:57 crc kubenswrapper[4727]: I0109 11:03:57.927704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0"
Jan 09 11:03:57 crc kubenswrapper[4727]: E0109 11:03:57.927953 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 09 11:03:57 crc kubenswrapper[4727]: E0109 11:03:57.927982 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 09 11:03:57 crc kubenswrapper[4727]: E0109 11:03:57.928049 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:04:01.928025965 +0000 UTC m=+1087.377930746 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:03:58 crc kubenswrapper[4727]: I0109 11:03:58.148677 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.271633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerStarted","Data":"fd08e66593fb75731b4677b270f51d5fcb873007a0ff1b0eec358d5c628765c7"} Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.301779 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-t2qwp" podStartSLOduration=2.033966724 podStartE2EDuration="5.301756146s" podCreationTimestamp="2026-01-09 11:03:54 +0000 UTC" firstStartedPulling="2026-01-09 11:03:55.324268535 +0000 UTC m=+1080.774173316" lastFinishedPulling="2026-01-09 11:03:58.592057957 +0000 UTC m=+1084.041962738" observedRunningTime="2026-01-09 11:03:59.297535629 +0000 UTC m=+1084.747440420" watchObservedRunningTime="2026-01-09 11:03:59.301756146 +0000 UTC m=+1084.751660927" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.671849 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.679614 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:03:59 crc kubenswrapper[4727]: E0109 11:03:59.679959 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="init" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.679978 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="init" Jan 09 11:03:59 crc kubenswrapper[4727]: E0109 11:03:59.680020 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.680027 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.680213 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af0367c-139f-443d-9b2b-54908e88f39c" containerName="dnsmasq-dns" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.680824 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.682933 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.693490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.763969 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.764112 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " 
pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.875164 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.879876 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.882649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:03:59 crc kubenswrapper[4727]: I0109 11:03:59.926999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"root-account-create-update-mks9d\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.039659 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.503296 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.815399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.817914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.822706 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.905284 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.905421 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.911294 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.913013 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.916145 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 09 11:04:00 crc kubenswrapper[4727]: I0109 11:04:00.920144 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007088 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.007612 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"keystone-db-create-6qxrb\" 
(UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.008558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.039561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"keystone-db-create-6qxrb\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.109285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.109354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.110247 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod 
\"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.119452 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.120833 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.132110 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.147364 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"keystone-7a4c-account-create-update-p6w9f\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.148912 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.211608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.211690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.213855 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.215333 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.225650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.232486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.232934 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.291159 4727 generic.go:334] "Generic (PLEG): container finished" podID="8c043374-06a3-4cb4-b105-d448282169b0" containerID="508aae6e73476bd7d8554f7bf79128adfc2937e36453761ce5d6c273144e8c65" exitCode=0 Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.291216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mks9d" event={"ID":"8c043374-06a3-4cb4-b105-d448282169b0","Type":"ContainerDied","Data":"508aae6e73476bd7d8554f7bf79128adfc2937e36453761ce5d6c273144e8c65"} Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.291247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mks9d" event={"ID":"8c043374-06a3-4cb4-b105-d448282169b0","Type":"ContainerStarted","Data":"2bf3d7bfd9ff5c75a3c4b900a3397369ea2ca9a10f62fb2d85e7b9615be81997"} Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316680 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.316771 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.317580 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.338034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"placement-db-create-j2gst\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.418225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.418404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.419381 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.434498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"placement-9ce5-account-create-update-cgwt7\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.533422 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.563076 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.714046 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.746297 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.929724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:04:01 crc kubenswrapper[4727]: E0109 11:04:01.930439 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 09 11:04:01 crc kubenswrapper[4727]: E0109 11:04:01.930484 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 09 11:04:01 crc kubenswrapper[4727]: E0109 11:04:01.930581 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift podName:b71205e9-ee26-48fb-aeeb-58eaee9ac9cf nodeName:}" failed. No retries permitted until 2026-01-09 11:04:09.930554282 +0000 UTC m=+1095.380459073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift") pod "swift-storage-0" (UID: "b71205e9-ee26-48fb-aeeb-58eaee9ac9cf") : configmap "swift-ring-files" not found Jan 09 11:04:01 crc kubenswrapper[4727]: I0109 11:04:01.998621 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.083485 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.308175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2gst" event={"ID":"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9","Type":"ContainerStarted","Data":"1f51dfdd818fb14101b6433f917a21c93101b4a9ea8fc4d6f3cec7bd10455ed9"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.309941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9ce5-account-create-update-cgwt7" event={"ID":"b5dba580-00b4-4bed-a734-78ac96b5cd4d","Type":"ContainerStarted","Data":"60aca13f224fb56772702304d509e56421ee68091611ab02f268739a0d563f53"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.312971 4727 generic.go:334] "Generic (PLEG): container finished" podID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerID="d6959b7da986b00bc70e51fdf39956f346afe58b899a2e451f5f896031407d83" exitCode=0 Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.313589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a4c-account-create-update-p6w9f" event={"ID":"b3fe1de7-6846-464a-8c23-b5cbc944ffaf","Type":"ContainerDied","Data":"d6959b7da986b00bc70e51fdf39956f346afe58b899a2e451f5f896031407d83"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.313746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a4c-account-create-update-p6w9f" 
event={"ID":"b3fe1de7-6846-464a-8c23-b5cbc944ffaf","Type":"ContainerStarted","Data":"6f39dc3c1660375ce3eb1d5f1b04d23e1399e5dcae67e0677da400036b1de267"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.316954 4727 generic.go:334] "Generic (PLEG): container finished" podID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerID="dfac37bf01ecc72f7cbe4e36980b1d63912e58d44854fd22b7eb51acb67a3482" exitCode=0 Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.317203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qxrb" event={"ID":"c54e2e39-4fb7-4ccb-98e4-437653bcc01c","Type":"ContainerDied","Data":"dfac37bf01ecc72f7cbe4e36980b1d63912e58d44854fd22b7eb51acb67a3482"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.317339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qxrb" event={"ID":"c54e2e39-4fb7-4ccb-98e4-437653bcc01c","Type":"ContainerStarted","Data":"e8f6190c7b981e11fe33deef696ee9dea4febb2c1b83c6e6bb5170c230e79959"} Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.825455 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.951353 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") pod \"8c043374-06a3-4cb4-b105-d448282169b0\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.951950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") pod \"8c043374-06a3-4cb4-b105-d448282169b0\" (UID: \"8c043374-06a3-4cb4-b105-d448282169b0\") " Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.953994 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c043374-06a3-4cb4-b105-d448282169b0" (UID: "8c043374-06a3-4cb4-b105-d448282169b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:02 crc kubenswrapper[4727]: I0109 11:04:02.970353 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k" (OuterVolumeSpecName: "kube-api-access-5bq7k") pod "8c043374-06a3-4cb4-b105-d448282169b0" (UID: "8c043374-06a3-4cb4-b105-d448282169b0"). InnerVolumeSpecName "kube-api-access-5bq7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.055160 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bq7k\" (UniqueName: \"kubernetes.io/projected/8c043374-06a3-4cb4-b105-d448282169b0-kube-api-access-5bq7k\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.055222 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c043374-06a3-4cb4-b105-d448282169b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.330589 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerID="4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0" exitCode=0 Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.330693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerDied","Data":"4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0"} Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.341917 4727 generic.go:334] "Generic (PLEG): container finished" podID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerID="a4b50d5c7e5a2ac088b99192a0ef8ae1f0162a1bb12adc59cf61c748194423e5" exitCode=0 Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.342021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2gst" event={"ID":"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9","Type":"ContainerDied","Data":"a4b50d5c7e5a2ac088b99192a0ef8ae1f0162a1bb12adc59cf61c748194423e5"} Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.347060 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerID="00e330dc8e4d5563bc7056af16edc5bfdbab81ae265d410bf050c38028359c89" exitCode=0 Jan 09 
11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.347265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9ce5-account-create-update-cgwt7" event={"ID":"b5dba580-00b4-4bed-a734-78ac96b5cd4d","Type":"ContainerDied","Data":"00e330dc8e4d5563bc7056af16edc5bfdbab81ae265d410bf050c38028359c89"} Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.356085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mks9d" event={"ID":"8c043374-06a3-4cb4-b105-d448282169b0","Type":"ContainerDied","Data":"2bf3d7bfd9ff5c75a3c4b900a3397369ea2ca9a10f62fb2d85e7b9615be81997"} Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.356157 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf3d7bfd9ff5c75a3c4b900a3397369ea2ca9a10f62fb2d85e7b9615be81997" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.356263 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mks9d" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.363158 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" exitCode=0 Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.363642 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerDied","Data":"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3"} Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.395870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.564733 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:04:03 crc kubenswrapper[4727]: 
I0109 11:04:03.565011 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns" containerID="cri-o://f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" gracePeriod=10 Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.787580 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.854213 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") pod \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879120 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") pod \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\" (UID: \"c54e2e39-4fb7-4ccb-98e4-437653bcc01c\") " Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879284 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") pod \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.879398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rpx\" (UniqueName: 
\"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") pod \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\" (UID: \"b3fe1de7-6846-464a-8c23-b5cbc944ffaf\") " Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.880899 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3fe1de7-6846-464a-8c23-b5cbc944ffaf" (UID: "b3fe1de7-6846-464a-8c23-b5cbc944ffaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.883730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c54e2e39-4fb7-4ccb-98e4-437653bcc01c" (UID: "c54e2e39-4fb7-4ccb-98e4-437653bcc01c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.887330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h" (OuterVolumeSpecName: "kube-api-access-w9s5h") pod "c54e2e39-4fb7-4ccb-98e4-437653bcc01c" (UID: "c54e2e39-4fb7-4ccb-98e4-437653bcc01c"). InnerVolumeSpecName "kube-api-access-w9s5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.888595 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx" (OuterVolumeSpecName: "kube-api-access-x7rpx") pod "b3fe1de7-6846-464a-8c23-b5cbc944ffaf" (UID: "b3fe1de7-6846-464a-8c23-b5cbc944ffaf"). InnerVolumeSpecName "kube-api-access-x7rpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981807 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rpx\" (UniqueName: \"kubernetes.io/projected/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-kube-api-access-x7rpx\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981854 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9s5h\" (UniqueName: \"kubernetes.io/projected/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-kube-api-access-w9s5h\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981869 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c54e2e39-4fb7-4ccb-98e4-437653bcc01c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:03 crc kubenswrapper[4727]: I0109 11:04:03.981884 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3fe1de7-6846-464a-8c23-b5cbc944ffaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.167347 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184211 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184368 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.184437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") pod \"7e8482c2-67f7-40f6-b225-af6914eed5c7\" (UID: \"7e8482c2-67f7-40f6-b225-af6914eed5c7\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.195256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv" (OuterVolumeSpecName: "kube-api-access-5fltv") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "kube-api-access-5fltv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.244180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config" (OuterVolumeSpecName: "config") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.244909 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.253484 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e8482c2-67f7-40f6-b225-af6914eed5c7" (UID: "7e8482c2-67f7-40f6-b225-af6914eed5c7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286750 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fltv\" (UniqueName: \"kubernetes.io/projected/7e8482c2-67f7-40f6-b225-af6914eed5c7-kube-api-access-5fltv\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286789 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286799 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.286808 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8482c2-67f7-40f6-b225-af6914eed5c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.373309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7a4c-account-create-update-p6w9f" event={"ID":"b3fe1de7-6846-464a-8c23-b5cbc944ffaf","Type":"ContainerDied","Data":"6f39dc3c1660375ce3eb1d5f1b04d23e1399e5dcae67e0677da400036b1de267"} Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.373653 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f39dc3c1660375ce3eb1d5f1b04d23e1399e5dcae67e0677da400036b1de267" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.373335 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7a4c-account-create-update-p6w9f" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.376184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerStarted","Data":"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56"} Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.376443 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.378106 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6qxrb" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.378105 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6qxrb" event={"ID":"c54e2e39-4fb7-4ccb-98e4-437653bcc01c","Type":"ContainerDied","Data":"e8f6190c7b981e11fe33deef696ee9dea4febb2c1b83c6e6bb5170c230e79959"} Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.378325 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f6190c7b981e11fe33deef696ee9dea4febb2c1b83c6e6bb5170c230e79959" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.380681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerStarted","Data":"9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633"} Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.380966 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383057 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" 
exitCode=0 Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383277 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerDied","Data":"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"} Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s8759" event={"ID":"7e8482c2-67f7-40f6-b225-af6914eed5c7","Type":"ContainerDied","Data":"90625e00836e35ec42870d8838b1bad64246fb7214b4d03011fe48a0e3903723"} Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.383588 4727 scope.go:117] "RemoveContainer" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.433695 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.985151214 podStartE2EDuration="58.43366768s" podCreationTimestamp="2026-01-09 11:03:06 +0000 UTC" firstStartedPulling="2026-01-09 11:03:08.949566614 +0000 UTC m=+1034.399471395" lastFinishedPulling="2026-01-09 11:03:29.39808308 +0000 UTC m=+1054.847987861" observedRunningTime="2026-01-09 11:04:04.408835551 +0000 UTC m=+1089.858740352" watchObservedRunningTime="2026-01-09 11:04:04.43366768 +0000 UTC m=+1089.883572461" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.446184 4727 scope.go:117] "RemoveContainer" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.459986 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=37.752940661 podStartE2EDuration="58.459965247s" podCreationTimestamp="2026-01-09 11:03:06 +0000 UTC" firstStartedPulling="2026-01-09 11:03:08.620383342 +0000 UTC m=+1034.070288123" lastFinishedPulling="2026-01-09 11:03:29.327407928 +0000 UTC m=+1054.777312709" observedRunningTime="2026-01-09 11:04:04.457561575 +0000 UTC m=+1089.907466376" watchObservedRunningTime="2026-01-09 11:04:04.459965247 +0000 UTC m=+1089.909870038" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.484601 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.497158 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s8759"] Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.498755 4727 scope.go:117] "RemoveContainer" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" Jan 09 11:04:04 crc kubenswrapper[4727]: E0109 11:04:04.502623 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276\": container with ID starting with f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276 not found: ID does not exist" containerID="f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.502666 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276"} err="failed to get container status \"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276\": rpc error: code = NotFound desc = could not find container \"f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276\": container with ID starting with f823a7f2e47f6c10023076e5894169ababd1b7beebfa352d8b450fa9c6a2f276 not found: ID 
does not exist" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.502697 4727 scope.go:117] "RemoveContainer" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86" Jan 09 11:04:04 crc kubenswrapper[4727]: E0109 11:04:04.507261 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86\": container with ID starting with 2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86 not found: ID does not exist" containerID="2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.507304 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86"} err="failed to get container status \"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86\": rpc error: code = NotFound desc = could not find container \"2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86\": container with ID starting with 2d569fbc60a788b257d8ff01821472d120263f3ecee8c78b02f4723b8578af86 not found: ID does not exist" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.810570 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.876011 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" path="/var/lib/kubelet/pods/7e8482c2-67f7-40f6-b225-af6914eed5c7/volumes" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.888888 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-j2gst" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.899105 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") pod \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.899148 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") pod \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.899231 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") pod \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\" (UID: \"b5dba580-00b4-4bed-a734-78ac96b5cd4d\") " Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.901770 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5dba580-00b4-4bed-a734-78ac96b5cd4d" (UID: "b5dba580-00b4-4bed-a734-78ac96b5cd4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.906337 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95" (OuterVolumeSpecName: "kube-api-access-b7w95") pod "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" (UID: "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9"). 
InnerVolumeSpecName "kube-api-access-b7w95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:04 crc kubenswrapper[4727]: I0109 11:04:04.906766 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7" (OuterVolumeSpecName: "kube-api-access-9lpr7") pod "b5dba580-00b4-4bed-a734-78ac96b5cd4d" (UID: "b5dba580-00b4-4bed-a734-78ac96b5cd4d"). InnerVolumeSpecName "kube-api-access-9lpr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.000851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") pod \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\" (UID: \"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9\") " Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001136 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5dba580-00b4-4bed-a734-78ac96b5cd4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001154 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7w95\" (UniqueName: \"kubernetes.io/projected/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-kube-api-access-b7w95\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001166 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lpr7\" (UniqueName: \"kubernetes.io/projected/b5dba580-00b4-4bed-a734-78ac96b5cd4d-kube-api-access-9lpr7\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.001358 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" (UID: "9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.102433 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.392482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j2gst" event={"ID":"9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9","Type":"ContainerDied","Data":"1f51dfdd818fb14101b6433f917a21c93101b4a9ea8fc4d6f3cec7bd10455ed9"} Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.392549 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f51dfdd818fb14101b6433f917a21c93101b4a9ea8fc4d6f3cec7bd10455ed9" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.392601 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j2gst" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.400150 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9ce5-account-create-update-cgwt7" event={"ID":"b5dba580-00b4-4bed-a734-78ac96b5cd4d","Type":"ContainerDied","Data":"60aca13f224fb56772702304d509e56421ee68091611ab02f268739a0d563f53"} Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.400224 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60aca13f224fb56772702304d509e56421ee68091611ab02f268739a0d563f53" Jan 09 11:04:05 crc kubenswrapper[4727]: I0109 11:04:05.400452 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9ce5-account-create-update-cgwt7" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.373635 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374239 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c043374-06a3-4cb4-b105-d448282169b0" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374253 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c043374-06a3-4cb4-b105-d448282169b0" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374268 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374274 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374290 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374307 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="init" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374312 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="init" Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374324 4727 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerName="mariadb-database-create" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374329 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerName="mariadb-database-create" Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374342 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerName="mariadb-database-create" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374348 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerName="mariadb-database-create" Jan 09 11:04:06 crc kubenswrapper[4727]: E0109 11:04:06.374356 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374361 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374532 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374546 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8482c2-67f7-40f6-b225-af6914eed5c7" containerName="dnsmasq-dns" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374562 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" containerName="mariadb-database-create" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374572 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 
11:04:06.374582 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" containerName="mariadb-database-create" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.374594 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c043374-06a3-4cb4-b105-d448282169b0" containerName="mariadb-account-create-update" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.375112 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.389467 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.480658 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.482391 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.488559 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.495895 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.530558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.530625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632427 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"glance-db-create-m6676\" (UID: 
\"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632626 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.632712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.633795 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.650864 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"glance-db-create-m6676\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.696391 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m6676" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.734352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.734735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.736005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.765959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"glance-65a5-account-create-update-swhhc\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:06 crc kubenswrapper[4727]: I0109 11:04:06.802419 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.316307 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:04:07 crc kubenswrapper[4727]: W0109 11:04:07.320672 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e8ff110_0416_4e41_b9cf_a9f622e9a4c8.slice/crio-88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89 WatchSource:0}: Error finding container 88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89: Status 404 returned error can't find the container with id 88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89 Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.420533 4727 generic.go:334] "Generic (PLEG): container finished" podID="5a7df215-53c5-4771-95de-9af59255b3de" containerID="fd08e66593fb75731b4677b270f51d5fcb873007a0ff1b0eec358d5c628765c7" exitCode=0 Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.420557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerDied","Data":"fd08e66593fb75731b4677b270f51d5fcb873007a0ff1b0eec358d5c628765c7"} Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.423582 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m6676" event={"ID":"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8","Type":"ContainerStarted","Data":"88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89"} Jan 09 11:04:07 crc kubenswrapper[4727]: I0109 11:04:07.467661 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:04:07 crc kubenswrapper[4727]: W0109 11:04:07.470786 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5471acc_7f1a_4b92_babf_8dea0d8c5a5b.slice/crio-45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab WatchSource:0}: Error finding container 45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab: Status 404 returned error can't find the container with id 45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.201735 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.209161 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mks9d"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.278710 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.280141 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.282996 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.291596 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.373059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.373449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.432203 4727 generic.go:334] "Generic (PLEG): container finished" podID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerID="538236df2e722658ac6062177b9a40be31fb73d68537a811c36bed8ec6ebd0f2" exitCode=0 Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.432262 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m6676" event={"ID":"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8","Type":"ContainerDied","Data":"538236df2e722658ac6062177b9a40be31fb73d68537a811c36bed8ec6ebd0f2"} Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.434035 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" 
containerID="29e8e8db2a35769af205e4fe07dfcb0f161be2135de38c69be53aa1504c48cb3" exitCode=0 Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.434132 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65a5-account-create-update-swhhc" event={"ID":"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b","Type":"ContainerDied","Data":"29e8e8db2a35769af205e4fe07dfcb0f161be2135de38c69be53aa1504c48cb3"} Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.434192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65a5-account-create-update-swhhc" event={"ID":"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b","Type":"ContainerStarted","Data":"45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab"} Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.481628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.481760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.482722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 
11:04:08.500978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"root-account-create-update-j9h4f\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") " pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.640757 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j9h4f" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.820937 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.878681 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c043374-06a3-4cb4-b105-d448282169b0" path="/var/lib/kubelet/pods/8c043374-06a3-4cb4-b105-d448282169b0/volumes" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: 
\"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991769 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991897 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.991992 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") pod \"5a7df215-53c5-4771-95de-9af59255b3de\" (UID: \"5a7df215-53c5-4771-95de-9af59255b3de\") " Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.993885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.995535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:04:08 crc kubenswrapper[4727]: I0109 11:04:08.999197 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2" (OuterVolumeSpecName: "kube-api-access-d5kn2") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "kube-api-access-d5kn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.000905 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.031134 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts" (OuterVolumeSpecName: "scripts") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.047599 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.064447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a7df215-53c5-4771-95de-9af59255b3de" (UID: "5a7df215-53c5-4771-95de-9af59255b3de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093682 4727 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093715 4727 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093725 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kn2\" (UniqueName: \"kubernetes.io/projected/5a7df215-53c5-4771-95de-9af59255b3de-kube-api-access-d5kn2\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093737 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a7df215-53c5-4771-95de-9af59255b3de-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 
11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093748 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093757 4727 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a7df215-53c5-4771-95de-9af59255b3de-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.093766 4727 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a7df215-53c5-4771-95de-9af59255b3de-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.099839 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:04:09 crc kubenswrapper[4727]: W0109 11:04:09.114851 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14fbdc64_2108_41db_88bd_d978e9ce6550.slice/crio-e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164 WatchSource:0}: Error finding container e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164: Status 404 returned error can't find the container with id e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164 Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.443065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerStarted","Data":"4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d"} Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.443141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" 
event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerStarted","Data":"e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164"} Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.444418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-t2qwp" event={"ID":"5a7df215-53c5-4771-95de-9af59255b3de","Type":"ContainerDied","Data":"7defc95c6498d89e6da8f7e9594f0703896df6675e2dac5d432b4b32dce7536c"} Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.444473 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7defc95c6498d89e6da8f7e9594f0703896df6675e2dac5d432b4b32dce7536c" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.444587 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2qwp" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.470344 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j9h4f" podStartSLOduration=1.470315588 podStartE2EDuration="1.470315588s" podCreationTimestamp="2026-01-09 11:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:09.465882455 +0000 UTC m=+1094.915787246" watchObservedRunningTime="2026-01-09 11:04:09.470315588 +0000 UTC m=+1094.920220379" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.934186 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.939028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:04:09 crc kubenswrapper[4727]: I0109 11:04:09.949108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b71205e9-ee26-48fb-aeeb-58eaee9ac9cf-etc-swift\") pod \"swift-storage-0\" (UID: \"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf\") " pod="openstack/swift-storage-0" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.013934 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m6676" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.040376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") pod \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") pod \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") pod 
\"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\" (UID: \"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041193 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") pod \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\" (UID: \"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8\") " Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041760 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" (UID: "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.041809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" (UID: "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.044524 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn" (OuterVolumeSpecName: "kube-api-access-97fqn") pod "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" (UID: "5e8ff110-0416-4e41-b9cf-a9f622e9a4c8"). InnerVolumeSpecName "kube-api-access-97fqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.044619 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk" (OuterVolumeSpecName: "kube-api-access-7fbdk") pod "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" (UID: "b5471acc-7f1a-4b92-babf-8dea0d8c5a5b"). InnerVolumeSpecName "kube-api-access-7fbdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.107129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143128 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97fqn\" (UniqueName: \"kubernetes.io/projected/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-kube-api-access-97fqn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143171 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbdk\" (UniqueName: \"kubernetes.io/projected/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-kube-api-access-7fbdk\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143185 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.143196 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.466159 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65a5-account-create-update-swhhc" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.466175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65a5-account-create-update-swhhc" event={"ID":"b5471acc-7f1a-4b92-babf-8dea0d8c5a5b","Type":"ContainerDied","Data":"45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab"} Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.466250 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f695b32c556232c261a8ada0585e1498c7b54d194adc2443a844049dd457ab" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.467984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m6676" event={"ID":"5e8ff110-0416-4e41-b9cf-a9f622e9a4c8","Type":"ContainerDied","Data":"88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89"} Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.468010 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cccbd9230115ca7e56dfac9691250373a265f059b7cbc342bcc106c0a61f89" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.468101 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-m6676" Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.469255 4727 generic.go:334] "Generic (PLEG): container finished" podID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerID="4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d" exitCode=0 Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.469291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerDied","Data":"4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d"} Jan 09 11:04:10 crc kubenswrapper[4727]: I0109 11:04:10.720691 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 09 11:04:10 crc kubenswrapper[4727]: W0109 11:04:10.726768 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb71205e9_ee26_48fb_aeeb_58eaee9ac9cf.slice/crio-cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29 WatchSource:0}: Error finding container cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29: Status 404 returned error can't find the container with id cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29 Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.353434 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mwrp2" podUID="d81594ff-04f5-47c2-9620-db583609e9aa" containerName="ovn-controller" probeResult="failure" output=< Jan 09 11:04:11 crc kubenswrapper[4727]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 09 11:04:11 crc kubenswrapper[4727]: > Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.397497 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wxljq" Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 
11:04:11.397781 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wxljq"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.480790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"cda445f07e154fc13d0569132741c977116bf4db69a0760bfa834790209cff29"}
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629340 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"]
Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.629887 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerName="mariadb-database-create"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629917 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerName="mariadb-database-create"
Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.629927 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" containerName="mariadb-account-create-update"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629937 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" containerName="mariadb-account-create-update"
Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.629964 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7df215-53c5-4771-95de-9af59255b3de" containerName="swift-ring-rebalance"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.629974 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7df215-53c5-4771-95de-9af59255b3de" containerName="swift-ring-rebalance"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.630189 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7df215-53c5-4771-95de-9af59255b3de" containerName="swift-ring-rebalance"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.630216 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" containerName="mariadb-account-create-update"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.630234 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" containerName="mariadb-database-create"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.631162 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: W0109 11:04:11.633605 4727 reflector.go:561] object-"openstack"/"ovncontroller-extra-scripts": failed to list *v1.ConfigMap: configmaps "ovncontroller-extra-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Jan 09 11:04:11 crc kubenswrapper[4727]: E0109 11:04:11.633678 4727 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-extra-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovncontroller-extra-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.683175 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"]
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782959 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.782990 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.783108 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.783148 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.820259 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4xh9m"]
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.821950 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.827049 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lsgwk"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.827359 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.852574 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4xh9m"]
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.887952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.888982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889075 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.889380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.891084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.891226 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.895000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.913444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:11 crc kubenswrapper[4727]: I0109 11:04:11.993326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.094766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.095018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.095147 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.095187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.099969 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.100748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.111211 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.129258 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"glance-db-sync-4xh9m\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.195935 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.345220 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j9h4f"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.402221 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") pod \"14fbdc64-2108-41db-88bd-d978e9ce6550\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") "
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.402631 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") pod \"14fbdc64-2108-41db-88bd-d978e9ce6550\" (UID: \"14fbdc64-2108-41db-88bd-d978e9ce6550\") "
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.404256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14fbdc64-2108-41db-88bd-d978e9ce6550" (UID: "14fbdc64-2108-41db-88bd-d978e9ce6550"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.433949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l" (OuterVolumeSpecName: "kube-api-access-hgf7l") pod "14fbdc64-2108-41db-88bd-d978e9ce6550" (UID: "14fbdc64-2108-41db-88bd-d978e9ce6550"). InnerVolumeSpecName "kube-api-access-hgf7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.500013 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j9h4f" event={"ID":"14fbdc64-2108-41db-88bd-d978e9ce6550","Type":"ContainerDied","Data":"e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164"}
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.501233 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b3033cdbe3b3afe65fcc8e51645d3f3e3df0bb474dab4f79db936b6f308164"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.501676 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j9h4f"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.506923 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14fbdc64-2108-41db-88bd-d978e9ce6550-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.506956 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgf7l\" (UniqueName: \"kubernetes.io/projected/14fbdc64-2108-41db-88bd-d978e9ce6550-kube-api-access-hgf7l\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.708523 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.710115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ovn-controller-mwrp2-config-rmlzz\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") " pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:12 crc kubenswrapper[4727]: I0109 11:04:12.871720 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.059672 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4xh9m"]
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.222123 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"]
Jan 09 11:04:13 crc kubenswrapper[4727]: W0109 11:04:13.245427 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef375b35_8012_4b0a_8aae_b95e88229bcd.slice/crio-ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b WatchSource:0}: Error finding container ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b: Status 404 returned error can't find the container with id ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"0610193605ed8e7c0c06c6965309dcfdd633bf38059da2cf5c4d111db7fbee40"}
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"e53fc0a831b11b95c3a849263dd21707f950fde33b7ea43c295ad58c7410e1c6"}
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"52e33a498a1040a65fe5f8e0c1ffaa114b5f0f60b4d1deb5461c8f6a7b7a5b7d"}
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.510303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"12df1bdbadf6d7d355bdf4f0dd78448a115effa901893dca3ecd0d71d496e543"}
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.521135 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerStarted","Data":"863f21e160c716253c80003d82a8f94ef13eba15f96ed75ef0407b75d22b1fd7"}
Jan 09 11:04:13 crc kubenswrapper[4727]: I0109 11:04:13.522473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-rmlzz" event={"ID":"ef375b35-8012-4b0a-8aae-b95e88229bcd","Type":"ContainerStarted","Data":"ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b"}
Jan 09 11:04:14 crc kubenswrapper[4727]: I0109 11:04:14.532598 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerID="5456968a5bb394405d1937902e90ca9c687f3ec8600257fc65b14f86f0be1050" exitCode=0
Jan 09 11:04:14 crc kubenswrapper[4727]: I0109 11:04:14.532784 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-rmlzz" event={"ID":"ef375b35-8012-4b0a-8aae-b95e88229bcd","Type":"ContainerDied","Data":"5456968a5bb394405d1937902e90ca9c687f3ec8600257fc65b14f86f0be1050"}
Jan 09 11:04:15 crc kubenswrapper[4727]: I0109 11:04:15.544641 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"6e46a1d193de258d86b97fb51b561f7d9eb130d5445274f3ae94ad67bec78835"}
Jan 09 11:04:15 crc kubenswrapper[4727]: I0109 11:04:15.544977 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"5e39fe941769bf53adc7222084ae7536ec9c2a373c1d360a37f70bfde09a2fdc"}
Jan 09 11:04:15 crc kubenswrapper[4727]: I0109 11:04:15.820161 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.005916 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") "
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.005986 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") "
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006016 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") "
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006040 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") "
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006109 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") "
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") pod \"ef375b35-8012-4b0a-8aae-b95e88229bcd\" (UID: \"ef375b35-8012-4b0a-8aae-b95e88229bcd\") "
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006324 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run" (OuterVolumeSpecName: "var-run") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006422 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006933 4727 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006967 4727 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-run\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.006979 4727 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ef375b35-8012-4b0a-8aae-b95e88229bcd-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.007313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.007474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts" (OuterVolumeSpecName: "scripts") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.015714 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv" (OuterVolumeSpecName: "kube-api-access-72wkv") pod "ef375b35-8012-4b0a-8aae-b95e88229bcd" (UID: "ef375b35-8012-4b0a-8aae-b95e88229bcd"). InnerVolumeSpecName "kube-api-access-72wkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.108485 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.109722 4727 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ef375b35-8012-4b0a-8aae-b95e88229bcd-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.109753 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72wkv\" (UniqueName: \"kubernetes.io/projected/ef375b35-8012-4b0a-8aae-b95e88229bcd-kube-api-access-72wkv\") on node \"crc\" DevicePath \"\""
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.351998 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mwrp2"
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.578984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"e962c3fe27b7e5dd9cdf6e8793b4e08269600f40d1ef2c69fd12ac8cc4ddcc7c"}
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.579033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"ec50fdd9c43320e397bf4728bf96742836509010dccf7afdaa0c3c08fa19ba83"}
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.584584 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-rmlzz" event={"ID":"ef375b35-8012-4b0a-8aae-b95e88229bcd","Type":"ContainerDied","Data":"ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b"}
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.584629 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7c02e579aeb5267546b4bd2135c21c2de0d032108ac2b1282c98d89c88992b"
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.584687 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-rmlzz"
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.969443 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"]
Jan 09 11:04:16 crc kubenswrapper[4727]: I0109 11:04:16.979875 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwrp2-config-rmlzz"]
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096006 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"]
Jan 09 11:04:17 crc kubenswrapper[4727]: E0109 11:04:17.096386 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerName="ovn-config"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096401 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerName="ovn-config"
Jan 09 11:04:17 crc kubenswrapper[4727]: E0109 11:04:17.096413 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerName="mariadb-account-create-update"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096421 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerName="mariadb-account-create-update"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096634 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" containerName="ovn-config"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.096655 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" containerName="mariadb-account-create-update"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.098473 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.111721 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.119784 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"]
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232476 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232584 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232739 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.232862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334754 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334886 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334909 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.334930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.335232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.335282 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.335652 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.337932 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc"
Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.338011 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " 
pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.360799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"ovn-controller-mwrp2-config-k2cwc\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.456482 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.929466 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 11:04:17 crc kubenswrapper[4727]: I0109 11:04:17.952603 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.293841 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.411638 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.412970 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.492500 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.493648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.493685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.530350 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.531667 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.541618 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.542999 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.545751 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.547101 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.557953 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.598411 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.598992 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.599026 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.599070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcpkf\" (UniqueName: 
\"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.601287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.648912 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod \"barbican-db-create-29t76\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.652303 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.653709 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.667526 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.679124 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.702694 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"c20d1e2d582bc7f3e0eb7d81bedadcb85a573b4d0b36134aa6e97e6e154971f0"} Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.702745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"45161b2783b38281c7608b606273cf4cbdcc1181b089d9ab210dbffe47203b2f"} Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703626 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.703758 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.704572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.714006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerStarted","Data":"4d7ec45dfec7c18bfb601f8431acbdb3c6a8e95fbad6f9a1130eb2d12aa29e66"} Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.733026 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"cinder-db-create-hwqw8\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.742225 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.743978 4727 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.753445 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.753709 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.753842 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.754056 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.775316 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.781923 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.807385 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.812008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.812316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.812540 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.813401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.821184 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.822272 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.837189 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.843427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"cinder-43da-account-create-update-4whcc\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.889587 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef375b35-8012-4b0a-8aae-b95e88229bcd" path="/var/lib/kubelet/pods/ef375b35-8012-4b0a-8aae-b95e88229bcd/volumes" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914567 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914711 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.914751 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.915844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.916984 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.928814 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.929290 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.930493 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.939435 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.941270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"barbican-1dcf-account-create-update-pmcnw\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:18 crc kubenswrapper[4727]: I0109 11:04:18.950484 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018788 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 
crc kubenswrapper[4727]: I0109 11:04:19.018838 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018959 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.018987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.019042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.027308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " 
pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.028180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.028266 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.057905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"keystone-db-sync-9gv8v\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.058826 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"neutron-db-create-rllkj\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.086629 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.102087 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.121535 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.122309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.145306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.171774 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.262738 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.498564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"neutron-d226-account-create-update-7gc64\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.548937 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.559776 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.581014 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.731437 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.750571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29t76" event={"ID":"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb","Type":"ContainerStarted","Data":"65f2fff2a226cff0ca9637112b12f4e0cadddcbe5397a486e4b4742cb4ad3a57"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.793352 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"9389358f597f41d1b1e23b0c3a124fc67fe2b6d451b65dbbb428a1df6d2952f8"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 
11:04:19.812015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerStarted","Data":"978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.814124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43da-account-create-update-4whcc" event={"ID":"22d06cd8-5172-4755-93f0-6c6aa036bed8","Type":"ContainerStarted","Data":"12421625aa6499759049a0d75177ae648ad7e0e2cd31f23558d670b3d4d0d249"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.816040 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dcf-account-create-update-pmcnw" event={"ID":"c1b70879-a5de-4ea1-9db1-82d9f0416a71","Type":"ContainerStarted","Data":"6dd13b5251934be21cbee5261f844ddb690fdea9fa0db87bb45d0ffc338ae4c4"} Jan 09 11:04:19 crc kubenswrapper[4727]: I0109 11:04:19.857632 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwrp2-config-k2cwc" podStartSLOduration=2.8576075 podStartE2EDuration="2.8576075s" podCreationTimestamp="2026-01-09 11:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:19.846648436 +0000 UTC m=+1105.296553237" watchObservedRunningTime="2026-01-09 11:04:19.8576075 +0000 UTC m=+1105.307512281" Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.231466 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.246063 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.269657 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 
11:04:20 crc kubenswrapper[4727]: E0109 11:04:20.558435 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod108eb21f_902c_4942_8be4_9a3b11146c25.slice/crio-958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc14bbd99_7e5d_48ab_8573_ad9c5eea68fb.slice/crio-d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.832036 4727 generic.go:334] "Generic (PLEG): container finished" podID="108eb21f-902c-4942-8be4-9a3b11146c25" containerID="958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.832663 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hwqw8" event={"ID":"108eb21f-902c-4942-8be4-9a3b11146c25","Type":"ContainerDied","Data":"958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.832692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hwqw8" event={"ID":"108eb21f-902c-4942-8be4-9a3b11146c25","Type":"ContainerStarted","Data":"2116cbb0ea70d1e1a92b671155d1e85b1b6e41a668395bb8fced330e5e6d1ece"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.838589 4727 generic.go:334] "Generic (PLEG): container finished" podID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" containerID="d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.838685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29t76" 
event={"ID":"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb","Type":"ContainerDied","Data":"d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.841446 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerStarted","Data":"33185353540e45e975c16eee3ad01875091fa7bf07d875d2c477b2502139451f"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.843975 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerID="978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.844090 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerDied","Data":"978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.852065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rllkj" event={"ID":"46480603-3f1d-4589-ba8e-9026edee07c7","Type":"ContainerStarted","Data":"bdfca0ed2919072c582cebffbacb441a947d9a6c744e51a2362b1387cc781911"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.857969 4727 generic.go:334] "Generic (PLEG): container finished" podID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerID="fd86d26604fa990daf0250e4ca92d0297bfeb8649e742dfecf596e5d32e6713b" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.858096 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43da-account-create-update-4whcc" event={"ID":"22d06cd8-5172-4755-93f0-6c6aa036bed8","Type":"ContainerDied","Data":"fd86d26604fa990daf0250e4ca92d0297bfeb8649e742dfecf596e5d32e6713b"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.864198 4727 
generic.go:334] "Generic (PLEG): container finished" podID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerID="5afe7ea6f705be5c16f92e80a56b8b0f094dbbcf85b0af4db628a7dbbeab8019" exitCode=0 Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.875670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dcf-account-create-update-pmcnw" event={"ID":"c1b70879-a5de-4ea1-9db1-82d9f0416a71","Type":"ContainerDied","Data":"5afe7ea6f705be5c16f92e80a56b8b0f094dbbcf85b0af4db628a7dbbeab8019"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.875829 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d226-account-create-update-7gc64" event={"ID":"4ad382ed-924d-4c03-88b2-63d89690a56a","Type":"ContainerStarted","Data":"14f756c9d04da9228c97da74f1d1bbf739393fd403f464e8d09ae338dd94194f"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.916560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"0140e7dbdc255dcc4032eb1e51762ba2ce51bdd602f600e3921063b5ce0ea817"} Jan 09 11:04:20 crc kubenswrapper[4727]: I0109 11:04:20.916612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"07c7499980dde0afe32fb192c15140b9713a74ded26008ea6303b08d705a095b"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.932481 4727 generic.go:334] "Generic (PLEG): container finished" podID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerID="8cbbc5a0e078338f400d60c2f06eefdbda48f9727dc50c6209388201bc809674" exitCode=0 Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.932896 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d226-account-create-update-7gc64" 
event={"ID":"4ad382ed-924d-4c03-88b2-63d89690a56a","Type":"ContainerDied","Data":"8cbbc5a0e078338f400d60c2f06eefdbda48f9727dc50c6209388201bc809674"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.965473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"76dcb50f7413cf7fdbb3bda5ea1e633c3dfc1d3d8958b9892fce69d3af15ffd9"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.965553 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b71205e9-ee26-48fb-aeeb-58eaee9ac9cf","Type":"ContainerStarted","Data":"5b0090596a500c66b0e4d37e7ce2d61925436a372e9b9b2c65d0c8ff5c0ee7fe"} Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.977132 4727 generic.go:334] "Generic (PLEG): container finished" podID="46480603-3f1d-4589-ba8e-9026edee07c7" containerID="1263ecb7bda875303dddab37976768c97598ef07433b73e25914d8e050a30df9" exitCode=0 Jan 09 11:04:21 crc kubenswrapper[4727]: I0109 11:04:21.977448 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rllkj" event={"ID":"46480603-3f1d-4589-ba8e-9026edee07c7","Type":"ContainerDied","Data":"1263ecb7bda875303dddab37976768c97598ef07433b73e25914d8e050a30df9"} Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.031714 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.373743476 podStartE2EDuration="29.031691282s" podCreationTimestamp="2026-01-09 11:03:53 +0000 UTC" firstStartedPulling="2026-01-09 11:04:10.728916211 +0000 UTC m=+1096.178820992" lastFinishedPulling="2026-01-09 11:04:17.386864017 +0000 UTC m=+1102.836768798" observedRunningTime="2026-01-09 11:04:22.012089449 +0000 UTC m=+1107.461994240" watchObservedRunningTime="2026-01-09 11:04:22.031691282 +0000 UTC m=+1107.481596063" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.298835 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.300447 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.305825 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.321913 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405342 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405439 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.405796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507531 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h29v7\" 
(UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507549 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.507608 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.508906 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.509596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.509722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.509886 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.510483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.565448 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"dnsmasq-dns-77585f5f8c-s22jb\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:22 crc kubenswrapper[4727]: I0109 11:04:22.631605 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.469602 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.475841 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.492750 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574602 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") pod \"108eb21f-902c-4942-8be4-9a3b11146c25\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574684 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") pod \"108eb21f-902c-4942-8be4-9a3b11146c25\" (UID: \"108eb21f-902c-4942-8be4-9a3b11146c25\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574741 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") pod \"46480603-3f1d-4589-ba8e-9026edee07c7\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574798 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") pod 
\"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") pod \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\" (UID: \"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.574900 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") pod \"46480603-3f1d-4589-ba8e-9026edee07c7\" (UID: \"46480603-3f1d-4589-ba8e-9026edee07c7\") " Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.575485 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "108eb21f-902c-4942-8be4-9a3b11146c25" (UID: "108eb21f-902c-4942-8be4-9a3b11146c25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.575490 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" (UID: "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.576221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46480603-3f1d-4589-ba8e-9026edee07c7" (UID: "46480603-3f1d-4589-ba8e-9026edee07c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.580159 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc" (OuterVolumeSpecName: "kube-api-access-pjddc") pod "46480603-3f1d-4589-ba8e-9026edee07c7" (UID: "46480603-3f1d-4589-ba8e-9026edee07c7"). InnerVolumeSpecName "kube-api-access-pjddc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.580758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf" (OuterVolumeSpecName: "kube-api-access-hcpkf") pod "108eb21f-902c-4942-8be4-9a3b11146c25" (UID: "108eb21f-902c-4942-8be4-9a3b11146c25"). InnerVolumeSpecName "kube-api-access-hcpkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.582342 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4" (OuterVolumeSpecName: "kube-api-access-mvbz4") pod "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" (UID: "c14bbd99-7e5d-48ab-8573-ad9c5eea68fb"). InnerVolumeSpecName "kube-api-access-mvbz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678180 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46480603-3f1d-4589-ba8e-9026edee07c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678229 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvbz4\" (UniqueName: \"kubernetes.io/projected/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-kube-api-access-mvbz4\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678242 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678251 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjddc\" (UniqueName: \"kubernetes.io/projected/46480603-3f1d-4589-ba8e-9026edee07c7-kube-api-access-pjddc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678261 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/108eb21f-902c-4942-8be4-9a3b11146c25-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:29 crc kubenswrapper[4727]: I0109 11:04:29.678269 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcpkf\" (UniqueName: \"kubernetes.io/projected/108eb21f-902c-4942-8be4-9a3b11146c25-kube-api-access-hcpkf\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.046693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hwqw8" 
event={"ID":"108eb21f-902c-4942-8be4-9a3b11146c25","Type":"ContainerDied","Data":"2116cbb0ea70d1e1a92b671155d1e85b1b6e41a668395bb8fced330e5e6d1ece"} Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.046738 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2116cbb0ea70d1e1a92b671155d1e85b1b6e41a668395bb8fced330e5e6d1ece" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.046801 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hwqw8" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.055962 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29t76" event={"ID":"c14bbd99-7e5d-48ab-8573-ad9c5eea68fb","Type":"ContainerDied","Data":"65f2fff2a226cff0ca9637112b12f4e0cadddcbe5397a486e4b4742cb4ad3a57"} Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.056020 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f2fff2a226cff0ca9637112b12f4e0cadddcbe5397a486e4b4742cb4ad3a57" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.056103 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29t76" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.063934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rllkj" event={"ID":"46480603-3f1d-4589-ba8e-9026edee07c7","Type":"ContainerDied","Data":"bdfca0ed2919072c582cebffbacb441a947d9a6c744e51a2362b1387cc781911"} Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.063998 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdfca0ed2919072c582cebffbacb441a947d9a6c744e51a2362b1387cc781911" Jan 09 11:04:30 crc kubenswrapper[4727]: I0109 11:04:30.064049 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rllkj" Jan 09 11:04:34 crc kubenswrapper[4727]: E0109 11:04:34.199180 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Jan 09 11:04:34 crc kubenswrapper[4727]: E0109 11:04:34.202819 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kx4zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:
nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-9gv8v_openstack(e5667805-aff5-4227-88df-2d2440259e9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:04:34 crc kubenswrapper[4727]: E0109 11:04:34.204403 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-9gv8v" podUID="e5667805-aff5-4227-88df-2d2440259e9b" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.402000 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.427496 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.439952 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.466725 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.481793 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") pod \"4ad382ed-924d-4c03-88b2-63d89690a56a\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.481993 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") pod \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482163 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") pod \"4ad382ed-924d-4c03-88b2-63d89690a56a\" (UID: \"4ad382ed-924d-4c03-88b2-63d89690a56a\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482212 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") pod \"22d06cd8-5172-4755-93f0-6c6aa036bed8\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482249 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") pod \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\" (UID: \"c1b70879-a5de-4ea1-9db1-82d9f0416a71\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.482746 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") pod \"22d06cd8-5172-4755-93f0-6c6aa036bed8\" (UID: \"22d06cd8-5172-4755-93f0-6c6aa036bed8\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.483921 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22d06cd8-5172-4755-93f0-6c6aa036bed8" (UID: "22d06cd8-5172-4755-93f0-6c6aa036bed8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.484285 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1b70879-a5de-4ea1-9db1-82d9f0416a71" (UID: "c1b70879-a5de-4ea1-9db1-82d9f0416a71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.484413 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ad382ed-924d-4c03-88b2-63d89690a56a" (UID: "4ad382ed-924d-4c03-88b2-63d89690a56a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.490906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84" (OuterVolumeSpecName: "kube-api-access-s9r84") pod "c1b70879-a5de-4ea1-9db1-82d9f0416a71" (UID: "c1b70879-a5de-4ea1-9db1-82d9f0416a71"). 
InnerVolumeSpecName "kube-api-access-s9r84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491310 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad382ed-924d-4c03-88b2-63d89690a56a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491336 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b70879-a5de-4ea1-9db1-82d9f0416a71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491347 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d06cd8-5172-4755-93f0-6c6aa036bed8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.491356 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9r84\" (UniqueName: \"kubernetes.io/projected/c1b70879-a5de-4ea1-9db1-82d9f0416a71-kube-api-access-s9r84\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.495427 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq" (OuterVolumeSpecName: "kube-api-access-vhrnq") pod "4ad382ed-924d-4c03-88b2-63d89690a56a" (UID: "4ad382ed-924d-4c03-88b2-63d89690a56a"). InnerVolumeSpecName "kube-api-access-vhrnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.508284 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz" (OuterVolumeSpecName: "kube-api-access-vl9cz") pod "22d06cd8-5172-4755-93f0-6c6aa036bed8" (UID: "22d06cd8-5172-4755-93f0-6c6aa036bed8"). InnerVolumeSpecName "kube-api-access-vl9cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.592937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.593587 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594029 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594143 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594171 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594247 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.594395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run" (OuterVolumeSpecName: "var-run") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.597783 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh" (OuterVolumeSpecName: "kube-api-access-88kfh") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "kube-api-access-88kfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.598729 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.598907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") pod \"2a1ee6a4-df6b-475f-89b5-2387d3664091\" (UID: \"2a1ee6a4-df6b-475f-89b5-2387d3664091\") " Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599766 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599881 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhrnq\" (UniqueName: \"kubernetes.io/projected/4ad382ed-924d-4c03-88b2-63d89690a56a-kube-api-access-vhrnq\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599886 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts" (OuterVolumeSpecName: "scripts") pod "2a1ee6a4-df6b-475f-89b5-2387d3664091" (UID: "2a1ee6a4-df6b-475f-89b5-2387d3664091"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599899 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88kfh\" (UniqueName: \"kubernetes.io/projected/2a1ee6a4-df6b-475f-89b5-2387d3664091-kube-api-access-88kfh\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599938 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl9cz\" (UniqueName: \"kubernetes.io/projected/22d06cd8-5172-4755-93f0-6c6aa036bed8-kube-api-access-vl9cz\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599950 4727 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599963 4727 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.599974 4727 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a1ee6a4-df6b-475f-89b5-2387d3664091-var-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.702057 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.702157 4727 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1ee6a4-df6b-475f-89b5-2387d3664091-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:34 crc kubenswrapper[4727]: I0109 11:04:34.724210 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:34 crc kubenswrapper[4727]: W0109 11:04:34.733970 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb73609c1_ae60_4f6e_a0eb_e36b1fa9e977.slice/crio-77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0 WatchSource:0}: Error finding container 77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0: Status 404 returned error can't find the container with id 77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0 Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.108091 4727 generic.go:334] "Generic (PLEG): container finished" podID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerID="305d595a75c0483e8f124c062e4312746f4a5e5e0df8f72d52d1280623e0cba4" exitCode=0 Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.108201 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerDied","Data":"305d595a75c0483e8f124c062e4312746f4a5e5e0df8f72d52d1280623e0cba4"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.108271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerStarted","Data":"77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.109854 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d226-account-create-update-7gc64" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.110891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d226-account-create-update-7gc64" event={"ID":"4ad382ed-924d-4c03-88b2-63d89690a56a","Type":"ContainerDied","Data":"14f756c9d04da9228c97da74f1d1bbf739393fd403f464e8d09ae338dd94194f"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.110984 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f756c9d04da9228c97da74f1d1bbf739393fd403f464e8d09ae338dd94194f" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.112755 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwrp2-config-k2cwc" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.112782 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwrp2-config-k2cwc" event={"ID":"2a1ee6a4-df6b-475f-89b5-2387d3664091","Type":"ContainerDied","Data":"4d7ec45dfec7c18bfb601f8431acbdb3c6a8e95fbad6f9a1130eb2d12aa29e66"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.112851 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7ec45dfec7c18bfb601f8431acbdb3c6a8e95fbad6f9a1130eb2d12aa29e66" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.119158 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-43da-account-create-update-4whcc" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.119272 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43da-account-create-update-4whcc" event={"ID":"22d06cd8-5172-4755-93f0-6c6aa036bed8","Type":"ContainerDied","Data":"12421625aa6499759049a0d75177ae648ad7e0e2cd31f23558d670b3d4d0d249"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.119331 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12421625aa6499759049a0d75177ae648ad7e0e2cd31f23558d670b3d4d0d249" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.122554 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerStarted","Data":"6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.128651 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1dcf-account-create-update-pmcnw" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.129139 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1dcf-account-create-update-pmcnw" event={"ID":"c1b70879-a5de-4ea1-9db1-82d9f0416a71","Type":"ContainerDied","Data":"6dd13b5251934be21cbee5261f844ddb690fdea9fa0db87bb45d0ffc338ae4c4"} Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.129178 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd13b5251934be21cbee5261f844ddb690fdea9fa0db87bb45d0ffc338ae4c4" Jan 09 11:04:35 crc kubenswrapper[4727]: E0109 11:04:35.131753 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-9gv8v" podUID="e5667805-aff5-4227-88df-2d2440259e9b" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.206882 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4xh9m" podStartSLOduration=3.004370555 podStartE2EDuration="24.206829342s" podCreationTimestamp="2026-01-09 11:04:11 +0000 UTC" firstStartedPulling="2026-01-09 11:04:13.078468458 +0000 UTC m=+1098.528373239" lastFinishedPulling="2026-01-09 11:04:34.280927245 +0000 UTC m=+1119.730832026" observedRunningTime="2026-01-09 11:04:35.191103721 +0000 UTC m=+1120.641008502" watchObservedRunningTime="2026-01-09 11:04:35.206829342 +0000 UTC m=+1120.656734123" Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.586589 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:35 crc kubenswrapper[4727]: I0109 11:04:35.614702 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwrp2-config-k2cwc"] Jan 09 11:04:36 crc 
kubenswrapper[4727]: I0109 11:04:36.137267 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerStarted","Data":"716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30"} Jan 09 11:04:36 crc kubenswrapper[4727]: I0109 11:04:36.137658 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:36 crc kubenswrapper[4727]: I0109 11:04:36.180496 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" podStartSLOduration=14.180470749 podStartE2EDuration="14.180470749s" podCreationTimestamp="2026-01-09 11:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:36.176217507 +0000 UTC m=+1121.626122288" watchObservedRunningTime="2026-01-09 11:04:36.180470749 +0000 UTC m=+1121.630375540" Jan 09 11:04:36 crc kubenswrapper[4727]: I0109 11:04:36.871796 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" path="/var/lib/kubelet/pods/2a1ee6a4-df6b-475f-89b5-2387d3664091/volumes" Jan 09 11:04:41 crc kubenswrapper[4727]: I0109 11:04:41.190894 4727 generic.go:334] "Generic (PLEG): container finished" podID="64657563-7e2f-46ef-a906-37e42398662a" containerID="6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8" exitCode=0 Jan 09 11:04:41 crc kubenswrapper[4727]: I0109 11:04:41.190982 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerDied","Data":"6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8"} Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.634801 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.650316 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.706906 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.707222 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-rj6lv" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" containerID="cri-o://0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" gracePeriod=10 Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768625 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768795 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.768854 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") pod \"64657563-7e2f-46ef-a906-37e42398662a\" (UID: \"64657563-7e2f-46ef-a906-37e42398662a\") " Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.775526 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc" (OuterVolumeSpecName: "kube-api-access-5wgrc") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "kube-api-access-5wgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.775798 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.824048 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.842013 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data" (OuterVolumeSpecName: "config-data") pod "64657563-7e2f-46ef-a906-37e42398662a" (UID: "64657563-7e2f-46ef-a906-37e42398662a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870718 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870755 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wgrc\" (UniqueName: \"kubernetes.io/projected/64657563-7e2f-46ef-a906-37e42398662a-kube-api-access-5wgrc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870768 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:42 crc kubenswrapper[4727]: I0109 11:04:42.870779 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64657563-7e2f-46ef-a906-37e42398662a-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.090129 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.175968 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176444 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176605 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.176708 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc6zp\" (UniqueName: \"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") pod \"72decd78-911c-43ff-9f4e-0d99d71cf84b\" (UID: \"72decd78-911c-43ff-9f4e-0d99d71cf84b\") " Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.182617 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp" (OuterVolumeSpecName: "kube-api-access-pc6zp") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "kube-api-access-pc6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216410 4727 generic.go:334] "Generic (PLEG): container finished" podID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" exitCode=0 Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216503 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerDied","Data":"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd"} Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rj6lv" event={"ID":"72decd78-911c-43ff-9f4e-0d99d71cf84b","Type":"ContainerDied","Data":"ffedb3ad232e881de0ea53dc764b91e3e9e59a538e4dad9e3e9c68ecba16f3db"} Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216623 4727 scope.go:117] "RemoveContainer" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.216949 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rj6lv" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.217864 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.218425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4xh9m" event={"ID":"64657563-7e2f-46ef-a906-37e42398662a","Type":"ContainerDied","Data":"863f21e160c716253c80003d82a8f94ef13eba15f96ed75ef0407b75d22b1fd7"} Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.218451 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="863f21e160c716253c80003d82a8f94ef13eba15f96ed75ef0407b75d22b1fd7" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.218544 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4xh9m" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.222033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.231076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.233864 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config" (OuterVolumeSpecName: "config") pod "72decd78-911c-43ff-9f4e-0d99d71cf84b" (UID: "72decd78-911c-43ff-9f4e-0d99d71cf84b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.250689 4727 scope.go:117] "RemoveContainer" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.271132 4727 scope.go:117] "RemoveContainer" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.271739 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd\": container with ID starting with 0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd not found: ID does not exist" containerID="0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.271783 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd"} err="failed to get container status \"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd\": rpc error: code = NotFound desc = could not find container \"0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd\": container with ID starting with 0d76f5fe52d9ae2c055acf5a0ada449a2ce9127bde70400d1179c1ed0eeb64cd not found: ID does not exist" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.271812 4727 scope.go:117] "RemoveContainer" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.272320 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1\": container with ID starting with 
e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1 not found: ID does not exist" containerID="e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.272342 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1"} err="failed to get container status \"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1\": rpc error: code = NotFound desc = could not find container \"e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1\": container with ID starting with e3bc51a445e7dbe0a48d756aa4be568b6bfd3817643f634476ab2c5312347ce1 not found: ID does not exist" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278676 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278850 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278872 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278885 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72decd78-911c-43ff-9f4e-0d99d71cf84b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.278898 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc6zp\" (UniqueName: 
\"kubernetes.io/projected/72decd78-911c-43ff-9f4e-0d99d71cf84b-kube-api-access-pc6zp\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.593486 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.601434 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rj6lv"] Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.687739 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688090 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64657563-7e2f-46ef-a906-37e42398662a" containerName="glance-db-sync" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688103 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="64657563-7e2f-46ef-a906-37e42398662a" containerName="glance-db-sync" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688111 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688117 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688125 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerName="ovn-config" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688131 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerName="ovn-config" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688143 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" 
containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688766 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688785 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688791 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688805 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688810 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688835 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688841 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688849 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="init" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688854 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="init" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688863 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688870 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: E0109 11:04:43.688885 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.688891 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689076 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689093 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689099 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1ee6a4-df6b-475f-89b5-2387d3664091" containerName="ovn-config" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689112 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" containerName="mariadb-account-create-update" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689125 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" containerName="dnsmasq-dns" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689133 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" containerName="mariadb-database-create" Jan 09 11:04:43 crc 
kubenswrapper[4727]: I0109 11:04:43.689142 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689150 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" containerName="mariadb-database-create" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.689159 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="64657563-7e2f-46ef-a906-37e42398662a" containerName="glance-db-sync" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.690230 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.712783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813413 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.813488 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.914991 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915055 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.915454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916568 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916659 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.916761 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:43 crc kubenswrapper[4727]: I0109 11:04:43.962483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod 
\"dnsmasq-dns-7ff5475cc9-gszpb\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:44 crc kubenswrapper[4727]: I0109 11:04:44.012257 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:44 crc kubenswrapper[4727]: I0109 11:04:44.515571 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:04:44 crc kubenswrapper[4727]: I0109 11:04:44.871173 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72decd78-911c-43ff-9f4e-0d99d71cf84b" path="/var/lib/kubelet/pods/72decd78-911c-43ff-9f4e-0d99d71cf84b/volumes" Jan 09 11:04:45 crc kubenswrapper[4727]: I0109 11:04:45.245445 4727 generic.go:334] "Generic (PLEG): container finished" podID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerID="23887e416fde2f38fe612379b7307c055f64d771c7bc20bcd11032e3c0ea705c" exitCode=0 Jan 09 11:04:45 crc kubenswrapper[4727]: I0109 11:04:45.245529 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerDied","Data":"23887e416fde2f38fe612379b7307c055f64d771c7bc20bcd11032e3c0ea705c"} Jan 09 11:04:45 crc kubenswrapper[4727]: I0109 11:04:45.245574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerStarted","Data":"3733ac359dd21c51d5f253b5404b05214c66a4c3eae7bdfe4843f65505ecec15"} Jan 09 11:04:46 crc kubenswrapper[4727]: I0109 11:04:46.256837 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerStarted","Data":"6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5"} Jan 09 11:04:46 crc kubenswrapper[4727]: I0109 11:04:46.257209 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:46 crc kubenswrapper[4727]: I0109 11:04:46.283878 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podStartSLOduration=3.283839187 podStartE2EDuration="3.283839187s" podCreationTimestamp="2026-01-09 11:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:04:46.279245976 +0000 UTC m=+1131.729150837" watchObservedRunningTime="2026-01-09 11:04:46.283839187 +0000 UTC m=+1131.733743998" Jan 09 11:04:52 crc kubenswrapper[4727]: I0109 11:04:52.326128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerStarted","Data":"9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef"} Jan 09 11:04:52 crc kubenswrapper[4727]: I0109 11:04:52.353674 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9gv8v" podStartSLOduration=3.305173895 podStartE2EDuration="34.35364736s" podCreationTimestamp="2026-01-09 11:04:18 +0000 UTC" firstStartedPulling="2026-01-09 11:04:20.305074489 +0000 UTC m=+1105.754979270" lastFinishedPulling="2026-01-09 11:04:51.353547954 +0000 UTC m=+1136.803452735" observedRunningTime="2026-01-09 11:04:52.350951312 +0000 UTC m=+1137.800856103" watchObservedRunningTime="2026-01-09 11:04:52.35364736 +0000 UTC m=+1137.803552181" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.014608 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.105102 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 
11:04:54.105427 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" containerID="cri-o://716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30" gracePeriod=10 Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.376625 4727 generic.go:334] "Generic (PLEG): container finished" podID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerID="716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30" exitCode=0 Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.377027 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerDied","Data":"716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30"} Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.686009 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834122 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834698 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834744 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.834777 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.835044 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") pod \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\" (UID: \"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977\") " Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.861172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7" (OuterVolumeSpecName: "kube-api-access-h29v7") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "kube-api-access-h29v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.923023 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config" (OuterVolumeSpecName: "config") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.941897 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.943184 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.943221 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h29v7\" (UniqueName: \"kubernetes.io/projected/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-kube-api-access-h29v7\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.943234 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.945697 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.948598 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:54 crc kubenswrapper[4727]: I0109 11:04:54.955278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" (UID: "b73609c1-ae60-4f6e-a0eb-e36b1fa9e977"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.045061 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.045104 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.045117 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.388419 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" event={"ID":"b73609c1-ae60-4f6e-a0eb-e36b1fa9e977","Type":"ContainerDied","Data":"77340686bbbb947fc45f984d1080557a4f70b32689248eca258bbdd2458ba4f0"} Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.388474 4727 scope.go:117] "RemoveContainer" containerID="716471d9a1a8dd8eac002f5e378835b54e592c8dc623314a7b9d0c79f4cc9b30" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.388501 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-s22jb" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.391529 4727 generic.go:334] "Generic (PLEG): container finished" podID="e5667805-aff5-4227-88df-2d2440259e9b" containerID="9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef" exitCode=0 Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.391544 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerDied","Data":"9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef"} Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.425687 4727 scope.go:117] "RemoveContainer" containerID="305d595a75c0483e8f124c062e4312746f4a5e5e0df8f72d52d1280623e0cba4" Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.448331 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:55 crc kubenswrapper[4727]: I0109 11:04:55.460115 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-s22jb"] Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.735710 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.870767 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" path="/var/lib/kubelet/pods/b73609c1-ae60-4f6e-a0eb-e36b1fa9e977/volumes" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.903190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") pod \"e5667805-aff5-4227-88df-2d2440259e9b\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.903422 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") pod \"e5667805-aff5-4227-88df-2d2440259e9b\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.903569 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") pod \"e5667805-aff5-4227-88df-2d2440259e9b\" (UID: \"e5667805-aff5-4227-88df-2d2440259e9b\") " Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.910338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx" (OuterVolumeSpecName: "kube-api-access-kx4zx") pod "e5667805-aff5-4227-88df-2d2440259e9b" (UID: "e5667805-aff5-4227-88df-2d2440259e9b"). InnerVolumeSpecName "kube-api-access-kx4zx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.933667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5667805-aff5-4227-88df-2d2440259e9b" (UID: "e5667805-aff5-4227-88df-2d2440259e9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:56 crc kubenswrapper[4727]: I0109 11:04:56.950919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data" (OuterVolumeSpecName: "config-data") pod "e5667805-aff5-4227-88df-2d2440259e9b" (UID: "e5667805-aff5-4227-88df-2d2440259e9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.006337 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.006374 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx4zx\" (UniqueName: \"kubernetes.io/projected/e5667805-aff5-4227-88df-2d2440259e9b-kube-api-access-kx4zx\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.006412 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5667805-aff5-4227-88df-2d2440259e9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.418308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gv8v" 
event={"ID":"e5667805-aff5-4227-88df-2d2440259e9b","Type":"ContainerDied","Data":"33185353540e45e975c16eee3ad01875091fa7bf07d875d2c477b2502139451f"} Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.418372 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33185353540e45e975c16eee3ad01875091fa7bf07d875d2c477b2502139451f" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.419047 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gv8v" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.746866 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:04:57 crc kubenswrapper[4727]: E0109 11:04:57.747252 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5667805-aff5-4227-88df-2d2440259e9b" containerName="keystone-db-sync" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747265 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5667805-aff5-4227-88df-2d2440259e9b" containerName="keystone-db-sync" Jan 09 11:04:57 crc kubenswrapper[4727]: E0109 11:04:57.747276 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747283 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" Jan 09 11:04:57 crc kubenswrapper[4727]: E0109 11:04:57.747313 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="init" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747321 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="init" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747567 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b73609c1-ae60-4f6e-a0eb-e36b1fa9e977" containerName="dnsmasq-dns" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.747579 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5667805-aff5-4227-88df-2d2440259e9b" containerName="keystone-db-sync" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.748461 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.764297 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.922960 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923851 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.923905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.924057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.929689 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.942386 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.954276 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.954871 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.955146 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.955759 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.961555 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"] Jan 09 11:04:57 crc kubenswrapper[4727]: I0109 11:04:57.991388 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030718 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030761 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030781 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030873 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.030906 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.032118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") 
pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.032696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.033230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.036300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.036477 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.084913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"dnsmasq-dns-5c5cc7c5ff-56w9g\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137584 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137649 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.137826 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.166244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.185917 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.189348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.205718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.207539 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.218146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"keystone-bootstrap-s6xvj\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") " pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.300332 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.342780 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.349841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.370267 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416325 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9frsk" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416672 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416790 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.416915 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.439292 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445353 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"horizon-9bd79bb5-sgxjp\" (UID: 
\"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445404 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.445430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.487829 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.489265 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.505028 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.505227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zbdpv" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.513232 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.526942 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.532708 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.532929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.562943 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563068 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563181 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563312 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563335 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563414 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92p2\" (UniqueName: 
\"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563651 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.563850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.570698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.570823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.570897 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.564433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.568320 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.580146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.580579 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.588570 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.631640 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.633097 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.639650 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.639904 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f596n" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.640267 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.651323 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.665824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"horizon-9bd79bb5-sgxjp\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") " pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678311 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678411 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678479 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " 
pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678519 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678566 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.678632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:04:58 crc kubenswrapper[4727]: I0109 11:04:58.679154 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.685850 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.695939 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.703863 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.705158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.710102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " 
pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.711241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.716321 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.739300 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"ceilometer-0\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.749398 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"barbican-db-sync-pss24\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.749879 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.755170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.783128 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.784872 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.796114 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.796336 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.796448 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fql5g" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.804005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.808690 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.814637 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.816526 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.826248 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.826349 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hx5p2" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.826541 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.839542 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"neutron-db-sync-mfhnm\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") " pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.850213 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.867214 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.892982 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893117 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " 
pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893227 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893264 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893311 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893414 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " 
pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.893450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.904078 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.931945 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.946303 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.950531 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.952658 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lsgwk" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.953441 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.953593 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.953738 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.974641 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.976714 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:58.987290 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.000002 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mfhnm" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001556 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001657 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001681 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001714 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001768 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.001901 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.002074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.007922 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.016272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.017716 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.020605 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.033218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.036073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.039172 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.039293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.046484 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"placement-db-sync-56tkr\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.051184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.056617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"cinder-db-sync-5c72l\" (UID: 
\"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.064721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"cinder-db-sync-5c72l\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.074924 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.132611 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 
11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139859 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.139916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8" Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.143671 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:00 crc 
kubenswrapper[4727]: I0109 11:04:59.143890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.143925 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.144005 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.144071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.144099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.173591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.174276 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-56tkr"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.175914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.191631 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.236612 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.238579 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248871 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.248980 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.249918 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.250175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.252628 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.252963 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253162 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253501 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.253775 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.255133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.260095 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.263789 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.264766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.265335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.266837 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"horizon-cf8ff49dc-bkwp8\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") " pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.273785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.284365 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.351869 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.351947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352240 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352308 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352440 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352547 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352718 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352770 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.352866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.387888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.409654 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456039 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456171 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456199 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456413 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.456611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.458332 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.463240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.464224 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.465040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.465123 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.466350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.466532 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.467999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.471481 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.472634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.473272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.502825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.521273 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.521427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"dnsmasq-dns-8b5c85b87-7llz6\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.521599 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.524424 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.586706 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:04:59.609776 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.512534 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.571722 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.623220 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.653221 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.655745 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.682075 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.772346 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793887 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793926 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.793957 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.794023 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.834026 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"]
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.896693 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.897293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.899627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.900498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.916486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:00 crc kubenswrapper[4727]: I0109 11:05:00.917024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"horizon-95bf4c4d9-vwkb9\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") " pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.028561 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.472103 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"]
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.484087 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"]
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.496405 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5c72l"]
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.513709 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mfhnm"]
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.537030 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-56tkr"]
Jan 09 11:05:01 crc kubenswrapper[4727]: E0109 11:05:01.547032 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]"
Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.549096 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pss24"]
Jan 09 11:05:01 crc kubenswrapper[4727]: W0109 11:05:01.571845 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda52e2c52_54f3_4f0d_9244_1ce7563deb78.slice/crio-22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b WatchSource:0}: Error finding container 22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b: Status 404 returned error can't find the container with id 22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b
Jan 09 11:05:01 crc 
kubenswrapper[4727]: I0109 11:05:01.582453 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:01 crc kubenswrapper[4727]: W0109 11:05:01.616368 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f7de868_87b0_49c7_ad5e_7c528f181550.slice/crio-a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb WatchSource:0}: Error finding container a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb: Status 404 returned error can't find the container with id a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.628590 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:05:01 crc kubenswrapper[4727]: W0109 11:05:01.639151 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3179052d_0a48_4988_9696_814faeb20563.slice/crio-829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb WatchSource:0}: Error finding container 829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb: Status 404 returned error can't find the container with id 829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.639293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9bd79bb5-sgxjp" event={"ID":"718817e7-7114-4473-84e7-56349b861c3e","Type":"ContainerStarted","Data":"4ab00658b972d762f35df32ce42e03171f3c7a20dae5a1fc6a4479d78d970b43"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.659803 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.668903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerStarted","Data":"9f4d6e1e84339b6e76c479a4901b4c944d69b816e4b882b5ef6e50a8f5fbe884"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.672634 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerStarted","Data":"feb2b5d615adb3db7bf2469345647c3857babf723321591e5d776e3acdeded1e"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.674047 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerStarted","Data":"22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.675725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerStarted","Data":"afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.675757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerStarted","Data":"e973b0683b8f22a32c62f57073fcb6e661f17c5966136ce4933d8facf809d424"} Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.695850 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.707607 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:01 crc kubenswrapper[4727]: I0109 11:05:01.716078 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s6xvj" podStartSLOduration=4.716053719 
podStartE2EDuration="4.716053719s" podCreationTimestamp="2026-01-09 11:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:01.705480766 +0000 UTC m=+1147.155385547" watchObservedRunningTime="2026-01-09 11:05:01.716053719 +0000 UTC m=+1147.165958490" Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.351562 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.712677 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.718080 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerStarted","Data":"61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.718110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerStarted","Data":"9fd2e2efda6f0fdf02a478cc42de4e68614bf7eee26261246b1c15c40d9abd07"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.720664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerStarted","Data":"a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.726794 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerID="0517845b382f4761d9f5fcd66722857b845de8c6eb388211fc09443dd7611f06" exitCode=0 
Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.726851 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerDied","Data":"0517845b382f4761d9f5fcd66722857b845de8c6eb388211fc09443dd7611f06"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.732269 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerStarted","Data":"235f1dbc729d8400ff61a870ff838d607f5f0556e4de01c9b178e4d7a4a3f9ca"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.749019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-95bf4c4d9-vwkb9" event={"ID":"1accd238-8dda-4882-b66b-96aefeb84df4","Type":"ContainerStarted","Data":"931c8c326cbc00e09537bfff38f3cacf375f75e745d5be55085827239bd67b5e"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.795386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerStarted","Data":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.795450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerStarted","Data":"161733537e9e43d072567aa0ebaf5bb7558fb6a7b7b38d11ea7ae89487092ac8"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.818126 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mfhnm" podStartSLOduration=4.81808074 podStartE2EDuration="4.81808074s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
11:05:02.739139555 +0000 UTC m=+1148.189044336" watchObservedRunningTime="2026-01-09 11:05:02.81808074 +0000 UTC m=+1148.267985531" Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.859642 4727 generic.go:334] "Generic (PLEG): container finished" podID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerID="35e35e5fffe61545ae2229c9a406ea280682b11d71b6da5a78e1848f4a83df3a" exitCode=0 Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.859930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" event={"ID":"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f","Type":"ContainerDied","Data":"35e35e5fffe61545ae2229c9a406ea280682b11d71b6da5a78e1848f4a83df3a"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.859976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" event={"ID":"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f","Type":"ContainerStarted","Data":"668d8a9a9ca39af05b665b849588a7df468c64707de37a55a4948a01511a92ba"} Jan 09 11:05:02 crc kubenswrapper[4727]: I0109 11:05:02.884633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cf8ff49dc-bkwp8" event={"ID":"19039fe6-ce4a-4e84-b355-9ed185f05060","Type":"ContainerStarted","Data":"a45c0fe9b2415ced716e83b8091dd784775539c8582b821d3ea575bffcd3c2b8"} Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.527045 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691308 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691363 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691462 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691601 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.691658 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") pod \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\" (UID: \"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f\") " Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.718964 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m" (OuterVolumeSpecName: "kube-api-access-lwz6m") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "kube-api-access-lwz6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.739416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config" (OuterVolumeSpecName: "config") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.747553 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.748880 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.768209 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.779302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" (UID: "7c3f9a1c-2ff1-4740-a36f-0bb73a50454f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796195 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796230 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwz6m\" (UniqueName: \"kubernetes.io/projected/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-kube-api-access-lwz6m\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796242 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796250 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-config\") on node \"crc\" DevicePath \"\"" Jan 09 
11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796260 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.796268 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.940670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" event={"ID":"7c3f9a1c-2ff1-4740-a36f-0bb73a50454f","Type":"ContainerDied","Data":"668d8a9a9ca39af05b665b849588a7df468c64707de37a55a4948a01511a92ba"} Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.940741 4727 scope.go:117] "RemoveContainer" containerID="35e35e5fffe61545ae2229c9a406ea280682b11d71b6da5a78e1848f4a83df3a" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.940751 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-56w9g" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.951339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerStarted","Data":"0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc"} Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.951406 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:03 crc kubenswrapper[4727]: I0109 11:05:03.980478 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" podStartSLOduration=5.980458042 podStartE2EDuration="5.980458042s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:03.977271411 +0000 UTC m=+1149.427176192" watchObservedRunningTime="2026-01-09 11:05:03.980458042 +0000 UTC m=+1149.430362823" Jan 09 11:05:04 crc kubenswrapper[4727]: I0109 11:05:04.028554 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:04 crc kubenswrapper[4727]: I0109 11:05:04.039056 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-56w9g"] Jan 09 11:05:04 crc kubenswrapper[4727]: I0109 11:05:04.890961 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" path="/var/lib/kubelet/pods/7c3f9a1c-2ff1-4740-a36f-0bb73a50454f/volumes" Jan 09 11:05:05 crc kubenswrapper[4727]: I0109 11:05:05.027000 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerStarted","Data":"6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4"} Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.064969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerStarted","Data":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.065595 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" containerID="cri-o://d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.065682 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-httpd" containerID="cri-o://472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.075315 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerStarted","Data":"912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4"} Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.075558 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" containerID="cri-o://6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.075721 4727 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" containerID="cri-o://912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4" gracePeriod=30 Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.111444 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.111420377 podStartE2EDuration="8.111420377s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:06.107724902 +0000 UTC m=+1151.557629683" watchObservedRunningTime="2026-01-09 11:05:06.111420377 +0000 UTC m=+1151.561325158" Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.147263 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.147233855 podStartE2EDuration="8.147233855s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:06.142960633 +0000 UTC m=+1151.592865414" watchObservedRunningTime="2026-01-09 11:05:06.147233855 +0000 UTC m=+1151.597138636" Jan 09 11:05:06 crc kubenswrapper[4727]: I0109 11:05:06.948542 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.094780 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095116 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095201 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095238 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095340 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.095413 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") pod \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\" (UID: \"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.097375 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs" (OuterVolumeSpecName: "logs") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.098391 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.102933 4727 generic.go:334] "Generic (PLEG): container finished" podID="08d6e612-28e9-41fc-8409-799a7a033814" containerID="912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4" exitCode=0 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.102972 4727 generic.go:334] "Generic (PLEG): container finished" podID="08d6e612-28e9-41fc-8409-799a7a033814" containerID="6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4" exitCode=143 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.103012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerDied","Data":"912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.103041 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerDied","Data":"6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107032 4727 generic.go:334] "Generic (PLEG): container finished" podID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" exitCode=0 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107065 4727 generic.go:334] "Generic (PLEG): container finished" podID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" exitCode=143 Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerDied","Data":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerDied","Data":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107116 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd960a0b-d875-4a0f-abfa-8b80ec3b5de6","Type":"ContainerDied","Data":"161733537e9e43d072567aa0ebaf5bb7558fb6a7b7b38d11ea7ae89487092ac8"} Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107134 4727 scope.go:117] "RemoveContainer" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107283 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.107928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.114171 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q" (OuterVolumeSpecName: "kube-api-access-gfw4q") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "kube-api-access-gfw4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.114616 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts" (OuterVolumeSpecName: "scripts") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.146778 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.164308 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.164904 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data" (OuterVolumeSpecName: "config-data") pod "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" (UID: "fd960a0b-d875-4a0f-abfa-8b80ec3b5de6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.173811 4727 scope.go:117] "RemoveContainer" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.198953 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200544 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200568 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200582 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200618 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200630 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200641 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 
11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200651 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfw4q\" (UniqueName: \"kubernetes.io/projected/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6-kube-api-access-gfw4q\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.200710 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.242327 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302292 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302498 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302542 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302633 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302710 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.302728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") pod \"08d6e612-28e9-41fc-8409-799a7a033814\" (UID: \"08d6e612-28e9-41fc-8409-799a7a033814\") " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303043 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303466 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs" (OuterVolumeSpecName: "logs") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.303500 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.309774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts" (OuterVolumeSpecName: "scripts") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.311688 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.312531 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf" (OuterVolumeSpecName: "kube-api-access-zmlqf") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "kube-api-access-zmlqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.316119 4727 scope.go:117] "RemoveContainer" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.329485 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": container with ID starting with 472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b not found: ID does not exist" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.329549 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} err="failed to get container status \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": rpc error: code = NotFound desc = could not find container \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": container with ID starting with 472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.329579 4727 scope.go:117] "RemoveContainer" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 
11:05:07.331633 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": container with ID starting with d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5 not found: ID does not exist" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.331653 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} err="failed to get container status \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": rpc error: code = NotFound desc = could not find container \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": container with ID starting with d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5 not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.331668 4727 scope.go:117] "RemoveContainer" containerID="472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.332081 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b"} err="failed to get container status \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": rpc error: code = NotFound desc = could not find container \"472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b\": container with ID starting with 472a8afae12d68c82ad024d9554ab52bf7bd121dbf09e26db21d90e96559634b not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.332096 4727 scope.go:117] "RemoveContainer" containerID="d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5" Jan 09 11:05:07 crc 
kubenswrapper[4727]: I0109 11:05:07.332291 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5"} err="failed to get container status \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": rpc error: code = NotFound desc = could not find container \"d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5\": container with ID starting with d970d0aeeab3da923dec62fed2a1fd972f4ca064f5fbd29e6ea68708651ce4c5 not found: ID does not exist" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.358578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.360634 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.383105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data" (OuterVolumeSpecName: "config-data") pod "08d6e612-28e9-41fc-8409-799a7a033814" (UID: "08d6e612-28e9-41fc-8409-799a7a033814"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410363 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410746 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410816 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.410981 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.411064 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d6e612-28e9-41fc-8409-799a7a033814-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.411125 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmlqf\" (UniqueName: \"kubernetes.io/projected/08d6e612-28e9-41fc-8409-799a7a033814-kube-api-access-zmlqf\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.411177 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08d6e612-28e9-41fc-8409-799a7a033814-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.431150 4727 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.487625 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.502105 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.516426 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.528906 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.530906 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.530987 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531049 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerName="init" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531060 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerName="init" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531079 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531090 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" 
containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531137 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531147 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.531172 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531180 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531565 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531626 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-log" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531651 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3f9a1c-2ff1-4740-a36f-0bb73a50454f" containerName="init" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531660 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d6e612-28e9-41fc-8409-799a7a033814" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.531669 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" containerName="glance-httpd" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.535412 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.542071 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.543832 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.582116 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.610320 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 
11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618700 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.618862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc 
kubenswrapper[4727]: I0109 11:05:07.652748 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.654393 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.662070 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.670278 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.700562 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:05:07 crc kubenswrapper[4727]: E0109 11:05:07.701440 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-xjqb7 logs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="18f7b91d-8aea-4cb4-bd21-3e29eadcf668" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" 
Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725495 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725626 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc 
kubenswrapper[4727]: I0109 11:05:07.725701 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725752 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725920 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725945 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.725982 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.726310 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.727722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.728775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.751274 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.751271 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.756438 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.758009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.759035 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.763304 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.802029 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.815099 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57c89666d8-8fhd6"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.816789 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833377 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.833614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835166 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: 
\"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.835201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.840424 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57c89666d8-8fhd6"] Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.844280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.847408 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.847718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.848028 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod 
\"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.864573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"horizon-7cbf5cf75b-vwxrh\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-scripts\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938680 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-secret-key\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-tls-certs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938754 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-config-data\") pod 
\"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-combined-ca-bundle\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89031be7-ef50-45c8-b43f-b34f66012f21-logs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:07 crc kubenswrapper[4727]: I0109 11:05:07.938953 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5s2\" (UniqueName: \"kubernetes.io/projected/89031be7-ef50-45c8-b43f-b34f66012f21-kube-api-access-7g5s2\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.009291 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041043 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-combined-ca-bundle\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041093 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89031be7-ef50-45c8-b43f-b34f66012f21-logs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041134 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5s2\" (UniqueName: \"kubernetes.io/projected/89031be7-ef50-45c8-b43f-b34f66012f21-kube-api-access-7g5s2\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041219 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-scripts\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-secret-key\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc 
kubenswrapper[4727]: I0109 11:05:08.041300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-tls-certs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.041333 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-config-data\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.043557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-scripts\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.043583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89031be7-ef50-45c8-b43f-b34f66012f21-config-data\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.043657 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89031be7-ef50-45c8-b43f-b34f66012f21-logs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.046169 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-secret-key\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.047245 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-horizon-tls-certs\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.051222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89031be7-ef50-45c8-b43f-b34f66012f21-combined-ca-bundle\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.069131 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5s2\" (UniqueName: \"kubernetes.io/projected/89031be7-ef50-45c8-b43f-b34f66012f21-kube-api-access-7g5s2\") pod \"horizon-57c89666d8-8fhd6\" (UID: \"89031be7-ef50-45c8-b43f-b34f66012f21\") " pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.122031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08d6e612-28e9-41fc-8409-799a7a033814","Type":"ContainerDied","Data":"235f1dbc729d8400ff61a870ff838d607f5f0556e4de01c9b178e4d7a4a3f9ca"} Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.122102 4727 scope.go:117] "RemoveContainer" containerID="912a3700a50ff07e9350ee2da745487a0c01cfb497b1d36700842699f8f37df4" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.122059 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.129710 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerID="afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd" exitCode=0 Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.129794 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.130173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerDied","Data":"afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd"} Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.151518 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.156984 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249502 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249587 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249633 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249711 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249767 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.249798 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.250233 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs" (OuterVolumeSpecName: "logs") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.251244 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\" (UID: \"18f7b91d-8aea-4cb4-bd21-3e29eadcf668\") " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.252709 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.261751 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.261786 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.261890 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.265447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts" (OuterVolumeSpecName: "scripts") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.271272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.271493 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.274966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data" (OuterVolumeSpecName: "config-data") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.280090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7" (OuterVolumeSpecName: "kube-api-access-xjqb7") pod "18f7b91d-8aea-4cb4-bd21-3e29eadcf668" (UID: "18f7b91d-8aea-4cb4-bd21-3e29eadcf668"). InnerVolumeSpecName "kube-api-access-xjqb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.306740 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.325351 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.341338 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.343966 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.348732 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.349101 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.355195 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366863 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjqb7\" (UniqueName: \"kubernetes.io/projected/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-kube-api-access-xjqb7\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366902 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366917 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366929 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366943 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f7b91d-8aea-4cb4-bd21-3e29eadcf668-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.366979 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.389763 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0" Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 
11:05:08.468901 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468921 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.468980 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.469011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.469032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.469085 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.570866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.570935 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.570958 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.571819 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.572143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.574361 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.594187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.594445 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.596096 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.604388 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.616480 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.662193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.682616 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.874266 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d6e612-28e9-41fc-8409-799a7a033814" path="/var/lib/kubelet/pods/08d6e612-28e9-41fc-8409-799a7a033814/volumes"
Jan 09 11:05:08 crc kubenswrapper[4727]: I0109 11:05:08.875691 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd960a0b-d875-4a0f-abfa-8b80ec3b5de6" path="/var/lib/kubelet/pods/fd960a0b-d875-4a0f-abfa-8b80ec3b5de6/volumes"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.154274 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.253229 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.267403 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.288113 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.290124 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.293797 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.293904 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.302520 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.405389 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.405995 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.424941 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.424994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425173 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425243 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.425702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527716 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527792 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527856 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527935 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.527995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.528023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.528308 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.529484 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.529885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.535542 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.539264 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.539950 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.540602 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.546479 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.566114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.589725 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.620588 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.673226 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"]
Jan 09 11:05:09 crc kubenswrapper[4727]: I0109 11:05:09.674166 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" containerID="cri-o://6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5" gracePeriod=10
Jan 09 11:05:10 crc kubenswrapper[4727]: I0109 11:05:10.166624 4727 generic.go:334] "Generic (PLEG): container finished" podID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerID="6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5" exitCode=0
Jan 09 11:05:10 crc kubenswrapper[4727]: I0109 11:05:10.166675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerDied","Data":"6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5"}
Jan 09 11:05:10 crc kubenswrapper[4727]: I0109 11:05:10.873993 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f7b91d-8aea-4cb4-bd21-3e29eadcf668" path="/var/lib/kubelet/pods/18f7b91d-8aea-4cb4-bd21-3e29eadcf668/volumes"
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.515294 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj"
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.605679 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") "
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.605816 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") "
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.605907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") "
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.606023 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") "
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.606056 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") "
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.606262 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") pod \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\" (UID: \"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44\") "
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.615236 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p" (OuterVolumeSpecName: "kube-api-access-wn82p") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "kube-api-access-wn82p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.616181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.616987 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts" (OuterVolumeSpecName: "scripts") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.622926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.644352 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data" (OuterVolumeSpecName: "config-data") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.659581 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" (UID: "bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.713924 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn82p\" (UniqueName: \"kubernetes.io/projected/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-kube-api-access-wn82p\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714352 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714450 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714555 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.714644 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:11 crc kubenswrapper[4727]: I0109 11:05:11.715407 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:11 crc kubenswrapper[4727]: E0109 11:05:11.959757 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.203093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s6xvj" event={"ID":"bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44","Type":"ContainerDied","Data":"e973b0683b8f22a32c62f57073fcb6e661f17c5966136ce4933d8facf809d424"}
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.203138 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e973b0683b8f22a32c62f57073fcb6e661f17c5966136ce4933d8facf809d424"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.203194 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s6xvj"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.604926 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"]
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.611613 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s6xvj"]
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.706619 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nd4pq"]
Jan 09 11:05:12 crc kubenswrapper[4727]: E0109 11:05:12.707090 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerName="keystone-bootstrap"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.707112 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerName="keystone-bootstrap"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.707296 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" containerName="keystone-bootstrap"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.707979 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.710094 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.710446 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.711017 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.711222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.712036 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.720585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"]
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840230 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840284 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840393 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.840654 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.874670 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44" path="/var/lib/kubelet/pods/bc8fc6c8-bd5e-47b9-b3ad-3c222872ec44/volumes"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.942936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.943011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.943058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.943118 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.944900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.945060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.948119 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.948858 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.954127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.960888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.961308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq"
Jan 09 11:05:12 crc kubenswrapper[4727]: I0109 11:05:12.963944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnppz\" (UniqueName:
\"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"keystone-bootstrap-nd4pq\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:13 crc kubenswrapper[4727]: I0109 11:05:13.035566 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:14 crc kubenswrapper[4727]: I0109 11:05:14.014245 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.207689 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:22 crc kubenswrapper[4727]: I0109 11:05:22.745882 4727 scope.go:117] "RemoveContainer" containerID="6a3c042893562213645d3acb8a9c1c6befb715aebc16e60a0abea638c6b130b4" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.749764 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.749941 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h7h5bch5cfh57ch8h66ch5b4h5f5h699h66hb4h574hc7h65bh5d8hd8h677h79hc4h5bh5cch677h5b8h668h64bh8h58h5dfh79hcfh68cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hs2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-9bd79bb5-sgxjp_openstack(718817e7-7114-4473-84e7-56349b861c3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.752874 
4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-9bd79bb5-sgxjp" podUID="718817e7-7114-4473-84e7-56349b861c3e" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.766743 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 11:05:22.767045 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n667h698h5dchbch656h5cch5ddh585h5fch699h65bh99h68chcbh654h55dh5c4h588h5cfh76h75h5dh599h575h698hfbh5f5h9dh696h58dh8fh5dfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwh9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-cf8ff49dc-bkwp8_openstack(19039fe6-ce4a-4e84-b355-9ed185f05060): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:22 crc kubenswrapper[4727]: E0109 
11:05:22.769078 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-cf8ff49dc-bkwp8" podUID="19039fe6-ce4a-4e84-b355-9ed185f05060" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.297699 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.297951 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n92p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pss24_openstack(a52e2c52-54f3-4f0d-9244-1ce7563deb78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.299219 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pss24" 
podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.314424 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.314628 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68h644h57ch5bbh5d4hc7hddh586h5ddh57bh5b7h5d4hfbh68bh6bh9dh68bh5f6h54bh586h695h56h84h56ch7fh58fh5ffh559hf8h666h579h564q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtckg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesy
stem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-95bf4c4d9-vwkb9_openstack(1accd238-8dda-4882-b66b-96aefeb84df4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.326591 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pss24" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" Jan 09 11:05:23 crc kubenswrapper[4727]: E0109 11:05:23.337861 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-95bf4c4d9-vwkb9" podUID="1accd238-8dda-4882-b66b-96aefeb84df4" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.485224 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583275 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583406 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583544 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583726 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.583805 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") pod \"863b94ea-e707-4c6a-8aa3-3241733e5257\" (UID: \"863b94ea-e707-4c6a-8aa3-3241733e5257\") " Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.600686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd" (OuterVolumeSpecName: "kube-api-access-ml5xd") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "kube-api-access-ml5xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.644401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.668342 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.668456 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.682264 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697751 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5xd\" (UniqueName: \"kubernetes.io/projected/863b94ea-e707-4c6a-8aa3-3241733e5257-kube-api-access-ml5xd\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697797 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697811 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697821 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.697832 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.698868 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config" (OuterVolumeSpecName: "config") pod "863b94ea-e707-4c6a-8aa3-3241733e5257" (UID: "863b94ea-e707-4c6a-8aa3-3241733e5257"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:23 crc kubenswrapper[4727]: I0109 11:05:23.799895 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863b94ea-e707-4c6a-8aa3-3241733e5257-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.013389 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.031108 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.336007 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" event={"ID":"863b94ea-e707-4c6a-8aa3-3241733e5257","Type":"ContainerDied","Data":"3733ac359dd21c51d5f253b5404b05214c66a4c3eae7bdfe4843f65505ecec15"} Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.336033 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-gszpb" Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.414941 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.425534 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-gszpb"] Jan 09 11:05:24 crc kubenswrapper[4727]: I0109 11:05:24.873106 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" path="/var/lib/kubelet/pods/863b94ea-e707-4c6a-8aa3-3241733e5257/volumes" Jan 09 11:05:27 crc kubenswrapper[4727]: I0109 11:05:27.375292 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerID="61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e" exitCode=0 Jan 09 11:05:27 crc kubenswrapper[4727]: I0109 11:05:27.375487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerDied","Data":"61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e"} Jan 09 11:05:32 crc kubenswrapper[4727]: E0109 11:05:32.457789 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:33 crc kubenswrapper[4727]: E0109 11:05:33.825762 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 09 11:05:33 crc kubenswrapper[4727]: E0109 11:05:33.826378 4727 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n667h7dh56ch8fh58dh8dh57h8chfh577h66bh9fh75hf5h555h644h75h58dhfch66h645hf7h689h579h55bh6fhdfh95h5b7h5d8hd8h56q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6746,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3179052d-0a48-4988-9696-814faeb20563): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:33 crc kubenswrapper[4727]: I0109 11:05:33.905245 4727 scope.go:117] "RemoveContainer" containerID="6c0e6a43dc3b84779bc7494f2c5e269d763cc56586926922b944a2958546bad5" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.048599 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp" Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.079933 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097754 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097836 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097916 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097964 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.097998 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") pod \"19039fe6-ce4a-4e84-b355-9ed185f05060\" (UID: \"19039fe6-ce4a-4e84-b355-9ed185f05060\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098271 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098307 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") pod \"718817e7-7114-4473-84e7-56349b861c3e\" (UID: \"718817e7-7114-4473-84e7-56349b861c3e\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.098941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs" (OuterVolumeSpecName: "logs") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.099414 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data" (OuterVolumeSpecName: "config-data") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.102343 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs" (OuterVolumeSpecName: "logs") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.102350 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.103435 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data" (OuterVolumeSpecName: "config-data") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.105176 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts" (OuterVolumeSpecName: "scripts") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.105217 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts" (OuterVolumeSpecName: "scripts") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.110814 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.110872 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mfhnm"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.111218 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b" (OuterVolumeSpecName: "kube-api-access-pwh9b") pod "19039fe6-ce4a-4e84-b355-9ed185f05060" (UID: "19039fe6-ce4a-4e84-b355-9ed185f05060"). InnerVolumeSpecName "kube-api-access-pwh9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.123395 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v" (OuterVolumeSpecName: "kube-api-access-2hs2v") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "kube-api-access-2hs2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.141982 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "718817e7-7114-4473-84e7-56349b861c3e" (UID: "718817e7-7114-4473-84e7-56349b861c3e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202554 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") pod \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") pod \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202705 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202749 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202878 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") pod \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\" (UID: \"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.202903 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.203138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs" (OuterVolumeSpecName: "logs") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.203425 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.203556 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") pod \"1accd238-8dda-4882-b66b-96aefeb84df4\" (UID: \"1accd238-8dda-4882-b66b-96aefeb84df4\") "
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204057 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/718817e7-7114-4473-84e7-56349b861c3e-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204073 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hs2v\" (UniqueName: \"kubernetes.io/projected/718817e7-7114-4473-84e7-56349b861c3e-kube-api-access-2hs2v\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204086 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwh9b\" (UniqueName: \"kubernetes.io/projected/19039fe6-ce4a-4e84-b355-9ed185f05060-kube-api-access-pwh9b\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204096 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19039fe6-ce4a-4e84-b355-9ed185f05060-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204105 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204117 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204126 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19039fe6-ce4a-4e84-b355-9ed185f05060-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204136 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/718817e7-7114-4473-84e7-56349b861c3e-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204145 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1accd238-8dda-4882-b66b-96aefeb84df4-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204121 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts" (OuterVolumeSpecName: "scripts") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204156 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/718817e7-7114-4473-84e7-56349b861c3e-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204165 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19039fe6-ce4a-4e84-b355-9ed185f05060-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.204147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data" (OuterVolumeSpecName: "config-data") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.208429 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.208680 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp" (OuterVolumeSpecName: "kube-api-access-jsvmp") pod "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" (UID: "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1"). InnerVolumeSpecName "kube-api-access-jsvmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.211252 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg" (OuterVolumeSpecName: "kube-api-access-jtckg") pod "1accd238-8dda-4882-b66b-96aefeb84df4" (UID: "1accd238-8dda-4882-b66b-96aefeb84df4"). InnerVolumeSpecName "kube-api-access-jtckg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.234705 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config" (OuterVolumeSpecName: "config") pod "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" (UID: "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.240850 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" (UID: "0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.328957 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-config\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.328999 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtckg\" (UniqueName: \"kubernetes.io/projected/1accd238-8dda-4882-b66b-96aefeb84df4-kube-api-access-jtckg\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329012 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329023 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329033 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1accd238-8dda-4882-b66b-96aefeb84df4-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329043 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1accd238-8dda-4882-b66b-96aefeb84df4-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.329052 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsvmp\" (UniqueName: \"kubernetes.io/projected/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1-kube-api-access-jsvmp\") on node \"crc\" DevicePath \"\""
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.375759 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.451414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9bd79bb5-sgxjp" event={"ID":"718817e7-7114-4473-84e7-56349b861c3e","Type":"ContainerDied","Data":"4ab00658b972d762f35df32ce42e03171f3c7a20dae5a1fc6a4479d78d970b43"}
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.451533 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9bd79bb5-sgxjp"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.457175 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mfhnm"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.457171 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mfhnm" event={"ID":"0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1","Type":"ContainerDied","Data":"9fd2e2efda6f0fdf02a478cc42de4e68614bf7eee26261246b1c15c40d9abd07"}
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.457366 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd2e2efda6f0fdf02a478cc42de4e68614bf7eee26261246b1c15c40d9abd07"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.459768 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-95bf4c4d9-vwkb9"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.459799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-95bf4c4d9-vwkb9" event={"ID":"1accd238-8dda-4882-b66b-96aefeb84df4","Type":"ContainerDied","Data":"931c8c326cbc00e09537bfff38f3cacf375f75e745d5be55085827239bd67b5e"}
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.466188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cf8ff49dc-bkwp8" event={"ID":"19039fe6-ce4a-4e84-b355-9ed185f05060","Type":"ContainerDied","Data":"a45c0fe9b2415ced716e83b8091dd784775539c8582b821d3ea575bffcd3c2b8"}
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.466394 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cf8ff49dc-bkwp8"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.479817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerStarted","Data":"64cc505548582ff0b92efe52617ea9736e870feb1d2d85557f334e68ae42a742"}
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.510090 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57c89666d8-8fhd6"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.529252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.538793 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-95bf4c4d9-vwkb9"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.592485 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.607587 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9bd79bb5-sgxjp"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.626110 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.638002 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cf8ff49dc-bkwp8"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.646319 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.654863 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.872372 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19039fe6-ce4a-4e84-b355-9ed185f05060" path="/var/lib/kubelet/pods/19039fe6-ce4a-4e84-b355-9ed185f05060/volumes"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.873056 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1accd238-8dda-4882-b66b-96aefeb84df4" path="/var/lib/kubelet/pods/1accd238-8dda-4882-b66b-96aefeb84df4/volumes"
Jan 09 11:05:34 crc kubenswrapper[4727]: I0109 11:05:34.873738 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718817e7-7114-4473-84e7-56349b861c3e" path="/var/lib/kubelet/pods/718817e7-7114-4473-84e7-56349b861c3e/volumes"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.496465 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"]
Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.497680 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerName="neutron-db-sync"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497698 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerName="neutron-db-sync"
Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.497715 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="init"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497722 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="init"
Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.497739 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497746 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497944 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="863b94ea-e707-4c6a-8aa3-3241733e5257" containerName="dnsmasq-dns"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.497961 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" containerName="neutron-db-sync"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.499313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.507379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"]
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.554204 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.558853 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.598667 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"]
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.600238 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.605084 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.605492 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f596n"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.605779 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.607453 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.636699 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"]
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.660959 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661590 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.661648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.662970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.663475 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.663824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.664322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.664341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.706317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"dnsmasq-dns-84b966f6c9-f9qzh\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.763522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.763652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.764057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.764097 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.764135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.768686 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.769058 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.771755 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.782344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr"
Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.784555 4727 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"neutron-6bdfc77c64-cjzlr\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.829374 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:35 crc kubenswrapper[4727]: I0109 11:05:35.935953 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:35 crc kubenswrapper[4727]: W0109 11:05:35.965268 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0333d9ce_e537_4702_9180_533644b70869.slice/crio-12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967 WatchSource:0}: Error finding container 12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967: Status 404 returned error can't find the container with id 12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967 Jan 09 11:05:35 crc kubenswrapper[4727]: W0109 11:05:35.973667 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89031be7_ef50_45c8_b43f_b34f66012f21.slice/crio-8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a WatchSource:0}: Error finding container 8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a: Status 404 returned error can't find the container with id 8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.987883 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.988093 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zk2mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Live
nessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5c72l_openstack(5f7de868-87b0-49c7-ad5e-7c528f181550): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:05:35 crc kubenswrapper[4727]: E0109 11:05:35.989449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5c72l" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.012440 4727 scope.go:117] "RemoveContainer" containerID="23887e416fde2f38fe612379b7307c055f64d771c7bc20bcd11032e3c0ea705c" Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.736016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerStarted","Data":"f359bb60ecb5049a25ef11d10b22c031018c3de4d2dffb82f605df54479897f8"} Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.739770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerStarted","Data":"12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967"} Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.742243 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c89666d8-8fhd6" event={"ID":"89031be7-ef50-45c8-b43f-b34f66012f21","Type":"ContainerStarted","Data":"8df443da3863ebd5bfda46f444d8e8888e17db1b551986837a32dfe4b05a1d2a"} Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.753154 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerStarted","Data":"2c768efbf36053423a59f381c55f6c5e4834d9d1dc1f2715dfcf51c67b4323c0"} Jan 09 11:05:36 crc kubenswrapper[4727]: E0109 11:05:36.755708 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5c72l" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" Jan 09 11:05:36 crc kubenswrapper[4727]: I0109 11:05:36.766626 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.018011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.785141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerStarted","Data":"8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.792786 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerStarted","Data":"d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.800815 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerStarted","Data":"8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.820691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerStarted","Data":"7b19e08e51c2187c9b787539a3d10f06721b0c9cd5e9e0ca48804bb7f658a9cf"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.825850 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pss24" podStartSLOduration=5.034709698 podStartE2EDuration="39.825791721s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="2026-01-09 11:05:01.581713445 +0000 UTC m=+1147.031618226" lastFinishedPulling="2026-01-09 11:05:36.372795468 +0000 UTC m=+1181.822700249" observedRunningTime="2026-01-09 11:05:37.810223481 +0000 UTC m=+1183.260128272" watchObservedRunningTime="2026-01-09 11:05:37.825791721 +0000 UTC m=+1183.275696502" Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.833209 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerStarted","Data":"4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.839595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerStarted","Data":"a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.849701 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c89666d8-8fhd6" 
event={"ID":"89031be7-ef50-45c8-b43f-b34f66012f21","Type":"ContainerStarted","Data":"0352e91e3b6f8f354549c2a614d9810f2ab2a775ae1cfdf255339394fd79299c"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.851521 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-56tkr" podStartSLOduration=7.621451488 podStartE2EDuration="39.851478544s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="2026-01-09 11:05:01.613997341 +0000 UTC m=+1147.063902122" lastFinishedPulling="2026-01-09 11:05:33.844024397 +0000 UTC m=+1179.293929178" observedRunningTime="2026-01-09 11:05:37.832973995 +0000 UTC m=+1183.282878776" watchObservedRunningTime="2026-01-09 11:05:37.851478544 +0000 UTC m=+1183.301383325" Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.856891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerStarted","Data":"84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.888024 4727 generic.go:334] "Generic (PLEG): container finished" podID="4862f781-5a00-439d-94b4-f717ce6324a2" containerID="fa78dd1b9838a1b44c24a9243a4a8cf4ce653daa745e1f7f47ee7a4b1b469835" exitCode=0 Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.888387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerDied","Data":"fa78dd1b9838a1b44c24a9243a4a8cf4ce653daa745e1f7f47ee7a4b1b469835"} Jan 09 11:05:37 crc kubenswrapper[4727]: I0109 11:05:37.888499 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerStarted","Data":"63091b70999aa18980c69d6d71c9c1317a8afc30e821bca924a95d321d78761c"} Jan 09 11:05:37 crc 
kubenswrapper[4727]: I0109 11:05:37.912147 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nd4pq" podStartSLOduration=25.912123993 podStartE2EDuration="25.912123993s" podCreationTimestamp="2026-01-09 11:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:37.906958393 +0000 UTC m=+1183.356863174" watchObservedRunningTime="2026-01-09 11:05:37.912123993 +0000 UTC m=+1183.362028774" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.073481 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8db497957-k8d9r"] Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.075497 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.079664 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.080742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132240 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-combined-ca-bundle\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " 
pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-public-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-ovndb-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnmk\" (UniqueName: \"kubernetes.io/projected/434346b3-08dc-43a6-aed9-3c00672c0c35-kube-api-access-pfnmk\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-internal-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.132522 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-httpd-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") 
" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.167260 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8db497957-k8d9r"] Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-public-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-ovndb-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnmk\" (UniqueName: \"kubernetes.io/projected/434346b3-08dc-43a6-aed9-3c00672c0c35-kube-api-access-pfnmk\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235219 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-internal-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-httpd-config\") pod 
\"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.235387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-combined-ca-bundle\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.240933 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-ovndb-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.253598 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-combined-ca-bundle\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.254770 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-internal-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 
crc kubenswrapper[4727]: I0109 11:05:38.255627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-public-tls-certs\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.258495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-httpd-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.267123 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/434346b3-08dc-43a6-aed9-3c00672c0c35-config\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.272323 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnmk\" (UniqueName: \"kubernetes.io/projected/434346b3-08dc-43a6-aed9-3c00672c0c35-kube-api-access-pfnmk\") pod \"neutron-8db497957-k8d9r\" (UID: \"434346b3-08dc-43a6-aed9-3c00672c0c35\") " pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.494279 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.929866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerStarted","Data":"7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265"} Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.954395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c89666d8-8fhd6" event={"ID":"89031be7-ef50-45c8-b43f-b34f66012f21","Type":"ContainerStarted","Data":"c96756fd46cc12528b047fad2396bce2cf5d57a6749b34484d3494f0b5561760"} Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.976840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerStarted","Data":"4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3"} Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.977352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.977416 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cbf5cf75b-vwxrh" podStartSLOduration=31.449320988 podStartE2EDuration="31.977400987s" podCreationTimestamp="2026-01-09 11:05:07 +0000 UTC" firstStartedPulling="2026-01-09 11:05:35.965657795 +0000 UTC m=+1181.415562576" lastFinishedPulling="2026-01-09 11:05:36.493737794 +0000 UTC m=+1181.943642575" observedRunningTime="2026-01-09 11:05:38.96234532 +0000 UTC m=+1184.412250131" watchObservedRunningTime="2026-01-09 11:05:38.977400987 +0000 UTC m=+1184.427305788" Jan 09 11:05:38 crc kubenswrapper[4727]: I0109 11:05:38.980125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab"} Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.002272 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57c89666d8-8fhd6" podStartSLOduration=31.202293306 podStartE2EDuration="32.002246807s" podCreationTimestamp="2026-01-09 11:05:07 +0000 UTC" firstStartedPulling="2026-01-09 11:05:35.994530524 +0000 UTC m=+1181.444435295" lastFinishedPulling="2026-01-09 11:05:36.794484015 +0000 UTC m=+1182.244388796" observedRunningTime="2026-01-09 11:05:38.995178926 +0000 UTC m=+1184.445083707" watchObservedRunningTime="2026-01-09 11:05:39.002246807 +0000 UTC m=+1184.452151608" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.030765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerStarted","Data":"69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd"} Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.030817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerStarted","Data":"be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717"} Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.030885 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.062479 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" podStartSLOduration=4.062458583 podStartE2EDuration="4.062458583s" podCreationTimestamp="2026-01-09 11:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-09 11:05:39.031042275 +0000 UTC m=+1184.480947076" watchObservedRunningTime="2026-01-09 11:05:39.062458583 +0000 UTC m=+1184.512363364" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.071270 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bdfc77c64-cjzlr" podStartSLOduration=4.0712464409999995 podStartE2EDuration="4.071246441s" podCreationTimestamp="2026-01-09 11:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:39.061800246 +0000 UTC m=+1184.511705057" watchObservedRunningTime="2026-01-09 11:05:39.071246441 +0000 UTC m=+1184.521151222" Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.233091 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8db497957-k8d9r"] Jan 09 11:05:39 crc kubenswrapper[4727]: W0109 11:05:39.238994 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod434346b3_08dc_43a6_aed9_3c00672c0c35.slice/crio-fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac WatchSource:0}: Error finding container fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac: Status 404 returned error can't find the container with id fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.405374 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:05:39 crc kubenswrapper[4727]: I0109 11:05:39.405437 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.039157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerStarted","Data":"cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87"} Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.045128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerStarted","Data":"a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db"} Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.049012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8db497957-k8d9r" event={"ID":"434346b3-08dc-43a6-aed9-3c00672c0c35","Type":"ContainerStarted","Data":"fbd3e3f933cbfb248bd19ca48b2c973f3135ee5847732f45189f970e679775ac"} Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.083208 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.083175624 podStartE2EDuration="32.083175624s" podCreationTimestamp="2026-01-09 11:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:40.067035929 +0000 UTC m=+1185.516940720" watchObservedRunningTime="2026-01-09 11:05:40.083175624 +0000 UTC m=+1185.533080405" Jan 09 11:05:40 crc kubenswrapper[4727]: I0109 11:05:40.104535 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.10447494 podStartE2EDuration="31.10447494s" 
podCreationTimestamp="2026-01-09 11:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:40.09893501 +0000 UTC m=+1185.548839801" watchObservedRunningTime="2026-01-09 11:05:40.10447494 +0000 UTC m=+1185.554379731" Jan 09 11:05:41 crc kubenswrapper[4727]: I0109 11:05:41.070521 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8db497957-k8d9r" event={"ID":"434346b3-08dc-43a6-aed9-3c00672c0c35","Type":"ContainerStarted","Data":"1276a8edafd070711298dd9ee6f8a38bb57278e90a13eca9c8ccbb2e3e5d6729"} Jan 09 11:05:42 crc kubenswrapper[4727]: I0109 11:05:42.099408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8db497957-k8d9r" event={"ID":"434346b3-08dc-43a6-aed9-3c00672c0c35","Type":"ContainerStarted","Data":"1d1aef94470a805f45904c85d0b95ad7dd7b81e684e648b2bb2867bb3d32604d"} Jan 09 11:05:42 crc kubenswrapper[4727]: I0109 11:05:42.100298 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:05:42 crc kubenswrapper[4727]: I0109 11:05:42.126046 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8db497957-k8d9r" podStartSLOduration=4.126024614 podStartE2EDuration="4.126024614s" podCreationTimestamp="2026-01-09 11:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:42.117781933 +0000 UTC m=+1187.567686734" watchObservedRunningTime="2026-01-09 11:05:42.126024614 +0000 UTC m=+1187.575929405" Jan 09 11:05:42 crc kubenswrapper[4727]: E0109 11:05:42.802454 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.113330 4727 generic.go:334] "Generic (PLEG): container finished" podID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerID="84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6" exitCode=0 Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.113535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerDied","Data":"84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6"} Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.117200 4727 generic.go:334] "Generic (PLEG): container finished" podID="790d27d6-9817-413b-b711-f0be91104704" containerID="8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff" exitCode=0 Jan 09 11:05:43 crc kubenswrapper[4727]: I0109 11:05:43.117265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerDied","Data":"8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff"} Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.136533 4727 generic.go:334] "Generic (PLEG): container finished" podID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerID="8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617" exitCode=0 Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.136645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerDied","Data":"8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617"} Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.830916 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.912010 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:45 crc kubenswrapper[4727]: I0109 11:05:45.912436 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" containerID="cri-o://0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc" gracePeriod=10 Jan 09 11:05:46 crc kubenswrapper[4727]: I0109 11:05:46.159785 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerID="0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc" exitCode=0 Jan 09 11:05:46 crc kubenswrapper[4727]: I0109 11:05:46.159821 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerDied","Data":"0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc"} Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.715529 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.724190 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.769677 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.873967 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894603 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894718 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") pod \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894738 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894803 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894832 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") pod \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") pod \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\" (UID: \"a52e2c52-54f3-4f0d-9244-1ce7563deb78\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.894990 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.895018 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 
crc kubenswrapper[4727]: I0109 11:05:47.895076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.895119 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") pod \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\" (UID: \"695f5777-ca94-4fee-9620-b22eb2a2d9ab\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.895145 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") pod \"790d27d6-9817-413b-b711-f0be91104704\" (UID: \"790d27d6-9817-413b-b711-f0be91104704\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.902838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts" (OuterVolumeSpecName: "scripts") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.914311 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs" (OuterVolumeSpecName: "logs") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.915774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j" (OuterVolumeSpecName: "kube-api-access-6tq6j") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "kube-api-access-6tq6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.921846 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2" (OuterVolumeSpecName: "kube-api-access-n92p2") pod "a52e2c52-54f3-4f0d-9244-1ce7563deb78" (UID: "a52e2c52-54f3-4f0d-9244-1ce7563deb78"). InnerVolumeSpecName "kube-api-access-n92p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.924474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.934318 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a52e2c52-54f3-4f0d-9244-1ce7563deb78" (UID: "a52e2c52-54f3-4f0d-9244-1ce7563deb78"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.940665 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.941920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz" (OuterVolumeSpecName: "kube-api-access-bnppz") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "kube-api-access-bnppz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.942036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts" (OuterVolumeSpecName: "scripts") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.948630 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.964660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data" (OuterVolumeSpecName: "config-data") pod "790d27d6-9817-413b-b711-f0be91104704" (UID: "790d27d6-9817-413b-b711-f0be91104704"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.968768 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data" (OuterVolumeSpecName: "config-data") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.992025 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695f5777-ca94-4fee-9620-b22eb2a2d9ab" (UID: "695f5777-ca94-4fee-9620-b22eb2a2d9ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997167 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997230 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.997647 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") pod \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\" (UID: \"bf11a72b-70ce-401b-aed0-21ce9c1fcf71\") " Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998222 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998245 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnppz\" (UniqueName: \"kubernetes.io/projected/695f5777-ca94-4fee-9620-b22eb2a2d9ab-kube-api-access-bnppz\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998265 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998277 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998288 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998299 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998310 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998321 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790d27d6-9817-413b-b711-f0be91104704-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998331 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998341 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695f5777-ca94-4fee-9620-b22eb2a2d9ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998352 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790d27d6-9817-413b-b711-f0be91104704-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998363 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92p2\" (UniqueName: \"kubernetes.io/projected/a52e2c52-54f3-4f0d-9244-1ce7563deb78-kube-api-access-n92p2\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:47 crc kubenswrapper[4727]: I0109 11:05:47.998374 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tq6j\" (UniqueName: \"kubernetes.io/projected/790d27d6-9817-413b-b711-f0be91104704-kube-api-access-6tq6j\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.000953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"a52e2c52-54f3-4f0d-9244-1ce7563deb78" (UID: "a52e2c52-54f3-4f0d-9244-1ce7563deb78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.008087 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc" (OuterVolumeSpecName: "kube-api-access-g95hc") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "kube-api-access-g95hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.010995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.011040 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.013341 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.049967 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.052877 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config" (OuterVolumeSpecName: "config") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.054830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.063820 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.073755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf11a72b-70ce-401b-aed0-21ce9c1fcf71" (UID: "bf11a72b-70ce-401b-aed0-21ce9c1fcf71"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103396 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103458 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52e2c52-54f3-4f0d-9244-1ce7563deb78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103477 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103492 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95hc\" (UniqueName: \"kubernetes.io/projected/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-kube-api-access-g95hc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103533 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103549 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.103562 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf11a72b-70ce-401b-aed0-21ce9c1fcf71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.160069 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.160672 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.161201 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57c89666d8-8fhd6" podUID="89031be7-ef50-45c8-b43f-b34f66012f21" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.187223 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pss24" event={"ID":"a52e2c52-54f3-4f0d-9244-1ce7563deb78","Type":"ContainerDied","Data":"22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.187271 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22339eb4dd8a082857ba09740bb52b9fe1e7d1d45d5d71000bba848d376a977b" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.187329 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pss24" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.200083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nd4pq" event={"ID":"695f5777-ca94-4fee-9620-b22eb2a2d9ab","Type":"ContainerDied","Data":"2c768efbf36053423a59f381c55f6c5e4834d9d1dc1f2715dfcf51c67b4323c0"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.200141 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c768efbf36053423a59f381c55f6c5e4834d9d1dc1f2715dfcf51c67b4323c0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.200231 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nd4pq" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.205853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.217127 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.217681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7llz6" event={"ID":"bf11a72b-70ce-401b-aed0-21ce9c1fcf71","Type":"ContainerDied","Data":"9f4d6e1e84339b6e76c479a4901b4c944d69b816e4b882b5ef6e50a8f5fbe884"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.217780 4727 scope.go:117] "RemoveContainer" containerID="0f814435953eb697512f07353de5b3958009ab602f7b669d0d110986ef5126fc" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.229120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-56tkr" event={"ID":"790d27d6-9817-413b-b711-f0be91104704","Type":"ContainerDied","Data":"feb2b5d615adb3db7bf2469345647c3857babf723321591e5d776e3acdeded1e"} Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.229202 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb2b5d615adb3db7bf2469345647c3857babf723321591e5d776e3acdeded1e" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.229348 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-56tkr" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.259722 4727 scope.go:117] "RemoveContainer" containerID="0517845b382f4761d9f5fcd66722857b845de8c6eb388211fc09443dd7611f06" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.284109 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.297097 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7llz6"] Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.683842 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.683912 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.750089 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.751278 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.873019 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" path="/var/lib/kubelet/pods/bf11a72b-70ce-401b-aed0-21ce9c1fcf71/volumes" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.902978 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-666857844b-c2hp6"] Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903438 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903457 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903475 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="init" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903483 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="init" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903498 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790d27d6-9817-413b-b711-f0be91104704" containerName="placement-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903528 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="790d27d6-9817-413b-b711-f0be91104704" containerName="placement-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903543 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerName="keystone-bootstrap" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903549 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerName="keystone-bootstrap" Jan 09 11:05:48 crc kubenswrapper[4727]: E0109 11:05:48.903560 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerName="barbican-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903569 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerName="barbican-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903783 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" containerName="barbican-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903830 4727 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" containerName="keystone-bootstrap" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903850 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="790d27d6-9817-413b-b711-f0be91104704" containerName="placement-db-sync" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.903860 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf11a72b-70ce-401b-aed0-21ce9c1fcf71" containerName="dnsmasq-dns" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.904599 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.909547 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.909809 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.909958 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.910270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dwjnt" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.910419 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.912151 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 09 11:05:48 crc kubenswrapper[4727]: I0109 11:05:48.998938 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-666857844b-c2hp6"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.021473 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-85c4f6b76d-7zrx8"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.023358 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-scripts\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024453 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtfz\" (UniqueName: \"kubernetes.io/projected/3738e7aa-d182-43a0-962c-b735526851f2-kube-api-access-2mtfz\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-combined-ca-bundle\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024644 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-public-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-fernet-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024697 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-credential-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024724 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-config-data\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.024748 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-internal-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.040418 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.040456 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.040742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hx5p2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 
11:05:49.040813 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.043906 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.069553 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c4f6b76d-7zrx8"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-combined-ca-bundle\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgpl6\" (UniqueName: \"kubernetes.io/projected/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-kube-api-access-wgpl6\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-scripts\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-public-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: 
\"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-config-data\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-public-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-logs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-fernet-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.128518 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-credential-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " 
pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138441 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76fd5dd86c-tmlx2"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-config-data\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-internal-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138733 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-internal-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-scripts\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138891 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mtfz\" (UniqueName: 
\"kubernetes.io/projected/3738e7aa-d182-43a0-962c-b735526851f2-kube-api-access-2mtfz\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.138934 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-combined-ca-bundle\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.151881 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.153759 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-fernet-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.159263 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-scripts\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.175585 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zbdpv" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.175665 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-internal-tls-certs\") pod 
\"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.175869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-public-tls-certs\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.176334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-credential-keys\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.176748 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.176894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-config-data\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.177385 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.184180 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3738e7aa-d182-43a0-962c-b735526851f2-combined-ca-bundle\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: 
I0109 11:05:49.221985 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d89df6ff4-gzcbx"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.223685 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.227999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mtfz\" (UniqueName: \"kubernetes.io/projected/3738e7aa-d182-43a0-962c-b735526851f2-kube-api-access-2mtfz\") pod \"keystone-666857844b-c2hp6\" (UID: \"3738e7aa-d182-43a0-962c-b735526851f2\") " pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.228634 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.229384 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.250828 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-internal-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tw4\" (UniqueName: \"kubernetes.io/projected/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-kube-api-access-66tw4\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251343 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-combined-ca-bundle\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251454 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251527 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgpl6\" (UniqueName: \"kubernetes.io/projected/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-kube-api-access-wgpl6\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-logs\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-scripts\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251766 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-public-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.251825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-combined-ca-bundle\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.252647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data-custom\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.252733 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-config-data\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.252759 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-logs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.253421 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-logs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.261827 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76fd5dd86c-tmlx2"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.262594 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-internal-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.264154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-public-tls-certs\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.286976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-scripts\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.304503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-combined-ca-bundle\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.312171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-config-data\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.314903 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgpl6\" (UniqueName: \"kubernetes.io/projected/f588c09f-34b7-4bf1-89f2-0f967cf6ddd6-kube-api-access-wgpl6\") pod \"placement-85c4f6b76d-7zrx8\" (UID: \"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6\") " pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.314977 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d89df6ff4-gzcbx"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-combined-ca-bundle\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-logs\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-combined-ca-bundle\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " 
pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354457 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r442g\" (UniqueName: \"kubernetes.io/projected/b166264d-8575-47af-88f1-c569c71c84f1-kube-api-access-r442g\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354492 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data-custom\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354542 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b166264d-8575-47af-88f1-c569c71c84f1-logs\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354602 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data-custom\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tw4\" (UniqueName: 
\"kubernetes.io/projected/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-kube-api-access-66tw4\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.354762 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.358079 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-logs\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.358874 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.363627 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.365294 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.370226 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data-custom\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.375284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-combined-ca-bundle\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.385603 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.400644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-config-data\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.443352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.445823 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.447681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tw4\" (UniqueName: 
\"kubernetes.io/projected/97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8-kube-api-access-66tw4\") pod \"barbican-worker-76fd5dd86c-tmlx2\" (UID: \"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8\") " pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.473136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.473825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-combined-ca-bundle\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.474972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r442g\" (UniqueName: \"kubernetes.io/projected/b166264d-8575-47af-88f1-c569c71c84f1-kube-api-access-r442g\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.489072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b166264d-8575-47af-88f1-c569c71c84f1-logs\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.494489 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-combined-ca-bundle\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.495975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b166264d-8575-47af-88f1-c569c71c84f1-logs\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.489308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data-custom\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.531310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data-custom\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.544183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b166264d-8575-47af-88f1-c569c71c84f1-config-data\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 
11:05:49.571063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r442g\" (UniqueName: \"kubernetes.io/projected/b166264d-8575-47af-88f1-c569c71c84f1-kube-api-access-r442g\") pod \"barbican-keystone-listener-d89df6ff4-gzcbx\" (UID: \"b166264d-8575-47af-88f1-c569c71c84f1\") " pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605148 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605174 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605196 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc 
kubenswrapper[4727]: I0109 11:05:49.605274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.605352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.628865 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.628938 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.640727 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.644745 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.664369 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.664383 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.707761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.708218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" 
Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.727710 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.756158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.765704 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.768212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.769384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.783121 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: 
\"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.786881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.798121 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.841908 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.855599 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"dnsmasq-dns-75c8ddd69c-jd4fj\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866898 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " 
pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866924 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.866971 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.867000 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.902963 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970805 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.970915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.971072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.972021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.974550 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.988212 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.989244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:49 crc kubenswrapper[4727]: I0109 11:05:49.996596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.032308 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"barbican-api-bbb58d5f8-5wxbz\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.169497 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-666857844b-c2hp6"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.270656 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.428751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-666857844b-c2hp6" event={"ID":"3738e7aa-d182-43a0-962c-b735526851f2","Type":"ContainerStarted","Data":"257f29c478f2f77d8ab87459adef8e54d8f9120fb2557bda6c875b56f4b692c0"} Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.430425 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.430447 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.516499 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85c4f6b76d-7zrx8"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.694181 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76fd5dd86c-tmlx2"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.732088 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d89df6ff4-gzcbx"] Jan 09 11:05:50 crc kubenswrapper[4727]: I0109 11:05:50.969422 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.101962 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.460881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-666857844b-c2hp6" event={"ID":"3738e7aa-d182-43a0-962c-b735526851f2","Type":"ContainerStarted","Data":"e9b5249158ce3c47b8a9559ecd7b24c7cd40e97071bbb03a35b034f4a8af741b"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.461627 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.476326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c4f6b76d-7zrx8" event={"ID":"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6","Type":"ContainerStarted","Data":"51e102ab73001250171aeac8da56c6cd9138cc4141d819cd5dcfd8ccd9ccc759"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.476387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c4f6b76d-7zrx8" event={"ID":"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6","Type":"ContainerStarted","Data":"f4462cb2c6255c4b2ca225a032cf1f4564d40101cd3c020b2b0cb2a26b3e0ac3"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.476401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85c4f6b76d-7zrx8" event={"ID":"f588c09f-34b7-4bf1-89f2-0f967cf6ddd6","Type":"ContainerStarted","Data":"38f806d77cf5116373985d1661bba0c86d97111858ff7ddbbd1805becd8aa786"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.477391 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.477425 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.485270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" event={"ID":"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8","Type":"ContainerStarted","Data":"8c38992963841723c9d58d1535b601e9e02c8e7703f2e2b913e36a9b1392ce64"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.496664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerStarted","Data":"5bc7bb7ac89ce392430ac7e65ff0eb04ba2048df225717424e45329a79f0c64a"} Jan 09 
11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.506370 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-666857844b-c2hp6" podStartSLOduration=3.506338579 podStartE2EDuration="3.506338579s" podCreationTimestamp="2026-01-09 11:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:51.489251968 +0000 UTC m=+1196.939156769" watchObservedRunningTime="2026-01-09 11:05:51.506338579 +0000 UTC m=+1196.956243360" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.518807 4727 generic.go:334] "Generic (PLEG): container finished" podID="c987342c-3221-479b-9298-cdf7c85e22cd" containerID="976be790afea6d4b89ec035b128ead320d45ad49b962862d4715341f9c9e16da" exitCode=0 Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.518908 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerDied","Data":"976be790afea6d4b89ec035b128ead320d45ad49b962862d4715341f9c9e16da"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.518945 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerStarted","Data":"2fc1bd7230fec540cd4a334d07ebbdb4b06f434463e354143dc267a731f76be2"} Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.532653 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85c4f6b76d-7zrx8" podStartSLOduration=3.532625069 podStartE2EDuration="3.532625069s" podCreationTimestamp="2026-01-09 11:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:51.513077851 +0000 UTC m=+1196.962982642" watchObservedRunningTime="2026-01-09 11:05:51.532625069 +0000 UTC 
m=+1196.982529850" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.556816 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.556835 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:51 crc kubenswrapper[4727]: I0109 11:05:51.564184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" event={"ID":"b166264d-8575-47af-88f1-c569c71c84f1","Type":"ContainerStarted","Data":"322485e94d04b07a0628480ea332d510b19bc2e880861e5537ca397a02f6be32"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.392870 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5456d7bfcd-5bs8c"] Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.395885 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.404131 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.404422 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.413516 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5456d7bfcd-5bs8c"] Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470358 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-internal-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef4869f-d107-4f5b-a136-166de8ac7a69-logs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvf98\" (UniqueName: \"kubernetes.io/projected/fef4869f-d107-4f5b-a136-166de8ac7a69-kube-api-access-mvf98\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-public-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.470568 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-combined-ca-bundle\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc 
kubenswrapper[4727]: I0109 11:05:52.470623 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data-custom\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573710 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef4869f-d107-4f5b-a136-166de8ac7a69-logs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvf98\" (UniqueName: \"kubernetes.io/projected/fef4869f-d107-4f5b-a136-166de8ac7a69-kube-api-access-mvf98\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-public-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-combined-ca-bundle\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: 
I0109 11:05:52.573947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data-custom\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.573988 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.574046 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-internal-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.577686 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef4869f-d107-4f5b-a136-166de8ac7a69-logs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.587987 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-combined-ca-bundle\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.588423 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.590028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerStarted","Data":"3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.590092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerStarted","Data":"087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.591318 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.591807 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.597322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-config-data-custom\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.598052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-public-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " 
pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.601728 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef4869f-d107-4f5b-a136-166de8ac7a69-internal-tls-certs\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.615572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerStarted","Data":"7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8"} Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.617332 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.617371 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.619243 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvf98\" (UniqueName: \"kubernetes.io/projected/fef4869f-d107-4f5b-a136-166de8ac7a69-kube-api-access-mvf98\") pod \"barbican-api-5456d7bfcd-5bs8c\" (UID: \"fef4869f-d107-4f5b-a136-166de8ac7a69\") " pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.642540 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podStartSLOduration=3.642493557 podStartE2EDuration="3.642493557s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:52.628225312 +0000 UTC m=+1198.078130093" watchObservedRunningTime="2026-01-09 11:05:52.642493557 +0000 UTC m=+1198.092398358" 
Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.662643 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" podStartSLOduration=3.662623611 podStartE2EDuration="3.662623611s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:52.662050185 +0000 UTC m=+1198.111954976" watchObservedRunningTime="2026-01-09 11:05:52.662623611 +0000 UTC m=+1198.112528392" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.722264 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.722484 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.724743 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:05:52 crc kubenswrapper[4727]: I0109 11:05:52.754109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:53 crc kubenswrapper[4727]: E0109 11:05:53.066329 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5667805_aff5_4227_88df_2d2440259e9b.slice/crio-conmon-9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.494863 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.495684 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.631947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerStarted","Data":"3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4"} Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.632676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:05:53 crc kubenswrapper[4727]: I0109 11:05:53.664427 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5c72l" podStartSLOduration=5.278280011 podStartE2EDuration="55.66439648s" podCreationTimestamp="2026-01-09 11:04:58 +0000 UTC" firstStartedPulling="2026-01-09 11:05:01.639200844 +0000 UTC m=+1147.089105635" lastFinishedPulling="2026-01-09 11:05:52.025317323 +0000 UTC m=+1197.475222104" observedRunningTime="2026-01-09 11:05:53.649617831 +0000 UTC m=+1199.099522622" watchObservedRunningTime="2026-01-09 11:05:53.66439648 +0000 UTC m=+1199.114301261" Jan 09 11:05:54 crc 
kubenswrapper[4727]: I0109 11:05:54.886163 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5456d7bfcd-5bs8c"] Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.676961 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5456d7bfcd-5bs8c" event={"ID":"fef4869f-d107-4f5b-a136-166de8ac7a69","Type":"ContainerStarted","Data":"cd27e0c259a7636ad03573906691b98d7c85ce2fc932733052406dc3d928b297"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5456d7bfcd-5bs8c" event={"ID":"fef4869f-d107-4f5b-a136-166de8ac7a69","Type":"ContainerStarted","Data":"734fa0136e05f8beda10d4f9902f1c2c1bb5e6f7f274719e7505e85690430187"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5456d7bfcd-5bs8c" event={"ID":"fef4869f-d107-4f5b-a136-166de8ac7a69","Type":"ContainerStarted","Data":"afaf8224bc057f25ff4fcfa18b1facecd43324f8ea9d02f371f078902fe74684"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677038 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.677049 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.679068 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" event={"ID":"b166264d-8575-47af-88f1-c569c71c84f1","Type":"ContainerStarted","Data":"dce98ec6c97926cab4955d81108e3efa253aa7aac5a89692c0a5f350ce898868"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.679117 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" 
event={"ID":"b166264d-8575-47af-88f1-c569c71c84f1","Type":"ContainerStarted","Data":"b0ce95c84516095056682d41fc1627e7bc2a93ae506e1ccc59847e696dee4555"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.698415 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" event={"ID":"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8","Type":"ContainerStarted","Data":"d07943b7feb486e00606a9c38566812b04b5b34da0b212acbacd4649165f14a7"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.698465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" event={"ID":"97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8","Type":"ContainerStarted","Data":"aec1b8c4d148dfd7407c7dce49b35ddf868dfc11e52729dcfd894bb065394bd4"} Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.715028 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5456d7bfcd-5bs8c" podStartSLOduration=3.71500706 podStartE2EDuration="3.71500706s" podCreationTimestamp="2026-01-09 11:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:05:55.70794865 +0000 UTC m=+1201.157853461" watchObservedRunningTime="2026-01-09 11:05:55.71500706 +0000 UTC m=+1201.164911831" Jan 09 11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.741979 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76fd5dd86c-tmlx2" podStartSLOduration=3.174287705 podStartE2EDuration="6.741953528s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="2026-01-09 11:05:50.725652519 +0000 UTC m=+1196.175557300" lastFinishedPulling="2026-01-09 11:05:54.293318342 +0000 UTC m=+1199.743223123" observedRunningTime="2026-01-09 11:05:55.723065978 +0000 UTC m=+1201.172970779" watchObservedRunningTime="2026-01-09 11:05:55.741953528 +0000 UTC m=+1201.191858309" Jan 09 
11:05:55 crc kubenswrapper[4727]: I0109 11:05:55.777016 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d89df6ff4-gzcbx" podStartSLOduration=3.226642518 podStartE2EDuration="6.776990444s" podCreationTimestamp="2026-01-09 11:05:49 +0000 UTC" firstStartedPulling="2026-01-09 11:05:50.743621064 +0000 UTC m=+1196.193525845" lastFinishedPulling="2026-01-09 11:05:54.29396899 +0000 UTC m=+1199.743873771" observedRunningTime="2026-01-09 11:05:55.74425078 +0000 UTC m=+1201.194155561" watchObservedRunningTime="2026-01-09 11:05:55.776990444 +0000 UTC m=+1201.226895225" Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.011500 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.160144 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57c89666d8-8fhd6" podUID="89031be7-ef50-45c8-b43f-b34f66012f21" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.731243 4727 generic.go:334] "Generic (PLEG): container finished" podID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerID="3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4" exitCode=0 Jan 09 11:05:58 crc kubenswrapper[4727]: I0109 11:05:58.731336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerDied","Data":"3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4"} Jan 09 11:05:59 crc 
kubenswrapper[4727]: I0109 11:05:59.977952 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.052887 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.053125 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" containerID="cri-o://4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3" gracePeriod=10 Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.788623 4727 generic.go:334] "Generic (PLEG): container finished" podID="4862f781-5a00-439d-94b4-f717ce6324a2" containerID="4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3" exitCode=0 Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.788690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerDied","Data":"4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3"} Jan 09 11:06:00 crc kubenswrapper[4727]: I0109 11:06:00.830274 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.746003 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.822173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5c72l" event={"ID":"5f7de868-87b0-49c7-ad5e-7c528f181550","Type":"ContainerDied","Data":"a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb"} Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.822229 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c218f0b746e4d8d3d4d5b059bc752653bb61c05d58b8ff2fbeaf4d39d42ebb" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.822242 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5c72l" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924404 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924436 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924561 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.924629 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") pod \"5f7de868-87b0-49c7-ad5e-7c528f181550\" (UID: \"5f7de868-87b0-49c7-ad5e-7c528f181550\") " Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.925990 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f7de868-87b0-49c7-ad5e-7c528f181550-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.937326 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg" (OuterVolumeSpecName: "kube-api-access-zk2mg") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). 
InnerVolumeSpecName "kube-api-access-zk2mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.940824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.941080 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts" (OuterVolumeSpecName: "scripts") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:01 crc kubenswrapper[4727]: I0109 11:06:01.960742 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.004066 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data" (OuterVolumeSpecName: "config-data") pod "5f7de868-87b0-49c7-ad5e-7c528f181550" (UID: "5f7de868-87b0-49c7-ad5e-7c528f181550"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028706 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk2mg\" (UniqueName: \"kubernetes.io/projected/5f7de868-87b0-49c7-ad5e-7c528f181550-kube-api-access-zk2mg\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028744 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028763 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028779 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.028791 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f7de868-87b0-49c7-ad5e-7c528f181550-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.035908 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:06:02 crc kubenswrapper[4727]: E0109 11:06:02.095271 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129281 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129554 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129573 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.129612 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") pod \"4862f781-5a00-439d-94b4-f717ce6324a2\" (UID: \"4862f781-5a00-439d-94b4-f717ce6324a2\") " Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.139660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45" (OuterVolumeSpecName: "kube-api-access-fkh45") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "kube-api-access-fkh45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.187076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.192019 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config" (OuterVolumeSpecName: "config") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.197127 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.206696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.219924 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4862f781-5a00-439d-94b4-f717ce6324a2" (UID: "4862f781-5a00-439d-94b4-f717ce6324a2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232089 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkh45\" (UniqueName: \"kubernetes.io/projected/4862f781-5a00-439d-94b4-f717ce6324a2-kube-api-access-fkh45\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232152 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232166 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232176 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232208 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.232218 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4862f781-5a00-439d-94b4-f717ce6324a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.449050 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.835779 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.835751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-f9qzh" event={"ID":"4862f781-5a00-439d-94b4-f717ce6324a2","Type":"ContainerDied","Data":"63091b70999aa18980c69d6d71c9c1317a8afc30e821bca924a95d321d78761c"} Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.835982 4727 scope.go:117] "RemoveContainer" containerID="4ebadf4fd6baea25ec608185888f0581847df51a5ca82a7f32dded54f080e9a3" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.840649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerStarted","Data":"95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688"} Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.841686 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" containerID="cri-o://e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab" gracePeriod=30 Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.842621 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.842614 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" containerID="cri-o://bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451" gracePeriod=30 Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.842728 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" 
containerID="cri-o://95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688" gracePeriod=30 Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.900631 4727 scope.go:117] "RemoveContainer" containerID="fa78dd1b9838a1b44c24a9243a4a8cf4ce653daa745e1f7f47ee7a4b1b469835" Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.906650 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:06:02 crc kubenswrapper[4727]: I0109 11:06:02.950743 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-f9qzh"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.309340 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.310094 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerName="cinder-db-sync" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310188 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerName="cinder-db-sync" Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.310267 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="init" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310323 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="init" Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.310392 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310447 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310756 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" containerName="dnsmasq-dns" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.310846 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" containerName="cinder-db-sync" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.312129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319198 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319423 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fql5g" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319558 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.319746 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.345960 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.525791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526143 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd69v\" (UniqueName: 
\"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.526994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: E0109 11:06:03.539205 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3179052d_0a48_4988_9696_814faeb20563.slice/crio-conmon-bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.549591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.583421 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.594655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.604060 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.606427 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.612082 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.616924 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660953 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.660983 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661019 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661040 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661061 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661109 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661196 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661216 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.661353 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.691072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 
09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.692992 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.703038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.703791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.704799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"cinder-scheduler-0\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767656 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767698 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767729 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767789 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: 
\"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767810 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767827 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767896 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: 
\"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.767956 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.768374 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.778692 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.778785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.786338 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.787336 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.790649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.809709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"cinder-api-0\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.875858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.875962 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876015 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: 
\"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876039 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876107 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.876127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.877737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.878050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " 
pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.878554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.878702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.880584 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.905664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"dnsmasq-dns-5784cf869f-q44wc\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.907942 4727 generic.go:334] "Generic (PLEG): container finished" podID="3179052d-0a48-4988-9696-814faeb20563" containerID="95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688" exitCode=0 Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.907990 4727 generic.go:334] "Generic (PLEG): container finished" podID="3179052d-0a48-4988-9696-814faeb20563" 
containerID="bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451" exitCode=2 Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.908020 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688"} Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.908058 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451"} Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.929166 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.958334 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:03 crc kubenswrapper[4727]: I0109 11:06:03.999926 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.025269 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.657067 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.726570 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.955212 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4862f781-5a00-439d-94b4-f717ce6324a2" path="/var/lib/kubelet/pods/4862f781-5a00-439d-94b4-f717ce6324a2/volumes" Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.956642 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.991441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerStarted","Data":"46e0819a2a4dd76f55beafd0dd463399c99fccea0ca8d438850be56e9391306d"} Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.994700 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerStarted","Data":"ce83cb6536bed5f69863a9bc02f546d105aa9cecf6f79078fbb71dfb9bf0d4f6"} Jan 09 11:06:04 crc kubenswrapper[4727]: I0109 11:06:04.999744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerStarted","Data":"1fc9e9988fd4856268dac8faebd8ec23ba321d236e5bf07d0594fdfe44867d1e"} Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.246825 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.344585 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.411808 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5456d7bfcd-5bs8c" Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.492199 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.492421 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" containerID="cri-o://087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68" gracePeriod=30 Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.492935 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" containerID="cri-o://3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc" gracePeriod=30 Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.501596 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 09 11:06:05 crc kubenswrapper[4727]: I0109 11:06:05.958818 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.038287 4727 generic.go:334] "Generic (PLEG): container finished" podID="b50668e7-e061-453a-bfcb-09cd1392aa57" 
containerID="40bb9476bfc07b9354c89f5cbef3057e68cde163c53908f4d6837e2be7ee3f19" exitCode=0 Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.038413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerDied","Data":"40bb9476bfc07b9354c89f5cbef3057e68cde163c53908f4d6837e2be7ee3f19"} Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.091925 4727 generic.go:334] "Generic (PLEG): container finished" podID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerID="087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68" exitCode=143 Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.091988 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerDied","Data":"087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68"} Jan 09 11:06:06 crc kubenswrapper[4727]: I0109 11:06:06.095420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerStarted","Data":"89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.131335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerStarted","Data":"4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.131736 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" containerID="cri-o://89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0" gracePeriod=30 Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.131966 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" containerID="cri-o://4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c" gracePeriod=30 Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.132098 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.140727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerStarted","Data":"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.144492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerStarted","Data":"8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713"} Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.144898 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.162174 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.162151683 podStartE2EDuration="4.162151683s" podCreationTimestamp="2026-01-09 11:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:07.157796895 +0000 UTC m=+1212.607701676" watchObservedRunningTime="2026-01-09 11:06:07.162151683 +0000 UTC m=+1212.612056464" Jan 09 11:06:07 crc kubenswrapper[4727]: I0109 11:06:07.192170 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" podStartSLOduration=4.192148523 
podStartE2EDuration="4.192148523s" podCreationTimestamp="2026-01-09 11:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:07.185915504 +0000 UTC m=+1212.635820305" watchObservedRunningTime="2026-01-09 11:06:07.192148523 +0000 UTC m=+1212.642053304" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.219326 4727 generic.go:334] "Generic (PLEG): container finished" podID="3d0f92bc-9d54-4382-b822-064c339799c4" containerID="89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0" exitCode=143 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.219402 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerDied","Data":"89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0"} Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.278983 4727 generic.go:334] "Generic (PLEG): container finished" podID="3179052d-0a48-4988-9696-814faeb20563" containerID="e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab" exitCode=0 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.279354 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab"} Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.298618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerStarted","Data":"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c"} Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.341133 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.256574424 
podStartE2EDuration="5.341108917s" podCreationTimestamp="2026-01-09 11:06:03 +0000 UTC" firstStartedPulling="2026-01-09 11:06:04.895760257 +0000 UTC m=+1210.345665038" lastFinishedPulling="2026-01-09 11:06:05.98029475 +0000 UTC m=+1211.430199531" observedRunningTime="2026-01-09 11:06:08.340544091 +0000 UTC m=+1213.790448882" watchObservedRunningTime="2026-01-09 11:06:08.341108917 +0000 UTC m=+1213.791013698" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.437976 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.515299 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8db497957-k8d9r" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522318 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522586 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522843 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.522926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.523028 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.523160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") pod \"3179052d-0a48-4988-9696-814faeb20563\" (UID: \"3179052d-0a48-4988-9696-814faeb20563\") " Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.523897 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.524022 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.530992 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts" (OuterVolumeSpecName: "scripts") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.534213 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746" (OuterVolumeSpecName: "kube-api-access-p6746") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "kube-api-access-p6746". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.586063 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.607427 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.607798 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bdfc77c64-cjzlr" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-api" containerID="cri-o://be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717" gracePeriod=30 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.608615 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bdfc77c64-cjzlr" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" containerID="cri-o://69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd" gracePeriod=30 Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627925 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627956 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627968 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6746\" (UniqueName: \"kubernetes.io/projected/3179052d-0a48-4988-9696-814faeb20563-kube-api-access-p6746\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627976 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 
11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.627987 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3179052d-0a48-4988-9696-814faeb20563-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.668707 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.672823 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data" (OuterVolumeSpecName: "config-data") pod "3179052d-0a48-4988-9696-814faeb20563" (UID: "3179052d-0a48-4988-9696-814faeb20563"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.731829 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:08 crc kubenswrapper[4727]: I0109 11:06:08.731870 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3179052d-0a48-4988-9696-814faeb20563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.004180 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.313534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3179052d-0a48-4988-9696-814faeb20563","Type":"ContainerDied","Data":"829560b6dfae72c191d23e414414ea22cbcd6bffd85c7a9af78641c121643beb"} Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.313632 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.315523 4727 scope.go:117] "RemoveContainer" containerID="95fc11fa0208881ee41933f76cd879db0f819e1423d39cb6c4b647484bd21688" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.317447 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerDied","Data":"69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd"} Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.317440 4727 generic.go:334] "Generic (PLEG): container finished" podID="29996e65-8eab-4604-a8ca-cac1063478fd" containerID="69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd" exitCode=0 Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.338574 4727 scope.go:117] "RemoveContainer" containerID="bbc0577f1a3ceb503a3354657fe517f889c62d37d5ed56bf5b32324c080ac451" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.365866 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.375065 4727 scope.go:117] "RemoveContainer" containerID="e8e7a17856d86789b93f98f81dd76d15749727af63483668eeeab9adadbd03ab" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.404972 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.408579 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.408650 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.408725 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.409774 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.409847 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c" gracePeriod=600 Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429018 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: E0109 11:06:09.429710 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429732 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" Jan 09 11:06:09 crc kubenswrapper[4727]: E0109 11:06:09.429767 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" Jan 09 
11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429775 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" Jan 09 11:06:09 crc kubenswrapper[4727]: E0109 11:06:09.429792 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.429799 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.430020 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="ceilometer-notification-agent" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.430031 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="proxy-httpd" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.430058 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3179052d-0a48-4988-9696-814faeb20563" containerName="sg-core" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.432173 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.436032 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.437460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.437953 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558656 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558815 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.558991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.559228 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661384 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661584 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.661674 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.662696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 
crc kubenswrapper[4727]: I0109 11:06:09.662742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.668489 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.670951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.671470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.671536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"ceilometer-0\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.687221 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"ceilometer-0\" (UID: 
\"38361e01-9ca6-4c45-8b88-809107b70a25\") " pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.764043 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:09 crc kubenswrapper[4727]: I0109 11:06:09.948695 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:35426->10.217.0.163:9311: read: connection reset by peer" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.322742 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375665 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c" exitCode=0 Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c"} Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a"} Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.375903 4727 scope.go:117] "RemoveContainer" containerID="d625973ce5423fb42fb573adc41ab816f0dd98828f87bbfec9d546169c7aa639" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.390479 4727 
generic.go:334] "Generic (PLEG): container finished" podID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerID="3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc" exitCode=0 Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.390787 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerDied","Data":"3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc"} Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.429732 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.512676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.647080 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.720959 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721047 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") pod 
\"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721242 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.721312 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") pod \"7283b7d5-d972-4c78-ac33-72488eedabf2\" (UID: \"7283b7d5-d972-4c78-ac33-72488eedabf2\") " Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.722729 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs" (OuterVolumeSpecName: "logs") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.732287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.734771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb" (OuterVolumeSpecName: "kube-api-access-wwhzb") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "kube-api-access-wwhzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.768784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.786232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data" (OuterVolumeSpecName: "config-data") pod "7283b7d5-d972-4c78-ac33-72488eedabf2" (UID: "7283b7d5-d972-4c78-ac33-72488eedabf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823889 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwhzb\" (UniqueName: \"kubernetes.io/projected/7283b7d5-d972-4c78-ac33-72488eedabf2-kube-api-access-wwhzb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823936 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823948 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7283b7d5-d972-4c78-ac33-72488eedabf2-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823958 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.823968 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7283b7d5-d972-4c78-ac33-72488eedabf2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:10 crc kubenswrapper[4727]: I0109 11:06:10.871316 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3179052d-0a48-4988-9696-814faeb20563" path="/var/lib/kubelet/pods/3179052d-0a48-4988-9696-814faeb20563/volumes" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.409288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"} Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.409684 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"c35551f5fd2325dd8ded3e2242e43e59a4eeb9e347df7aa845f106c0ffc6e15c"} Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.411533 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bbb58d5f8-5wxbz" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.411547 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bbb58d5f8-5wxbz" event={"ID":"7283b7d5-d972-4c78-ac33-72488eedabf2","Type":"ContainerDied","Data":"5bc7bb7ac89ce392430ac7e65ff0eb04ba2048df225717424e45329a79f0c64a"} Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.411618 4727 scope.go:117] "RemoveContainer" containerID="3290940d98cb1b592fcc6799f480ce595161eccf97bbcce9c02ee8e848f1fbfc" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.441574 4727 scope.go:117] "RemoveContainer" containerID="087768cdd73ed065a66b22962288396d1e38c719517729e6dd8a6b51654c4e68" Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.442930 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:06:11 crc kubenswrapper[4727]: I0109 11:06:11.466338 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-bbb58d5f8-5wxbz"] Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.163720 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.389389 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57c89666d8-8fhd6" Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.429691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"} Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.477816 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.478114 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" containerID="cri-o://d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3" gracePeriod=30 Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.478682 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" containerID="cri-o://7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265" gracePeriod=30 Jan 09 11:06:12 crc kubenswrapper[4727]: I0109 11:06:12.872048 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" path="/var/lib/kubelet/pods/7283b7d5-d972-4c78-ac33-72488eedabf2/volumes" Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.445232 4727 generic.go:334] "Generic (PLEG): container finished" podID="29996e65-8eab-4604-a8ca-cac1063478fd" containerID="be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717" exitCode=0 Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.445314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerDied","Data":"be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717"} Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.448033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"} Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.867709 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.932068 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:06:13 crc kubenswrapper[4727]: I0109 11:06:13.999867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:13.999971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.000076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.000183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.000230 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") pod \"29996e65-8eab-4604-a8ca-cac1063478fd\" (UID: \"29996e65-8eab-4604-a8ca-cac1063478fd\") " Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.009405 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.009674 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" containerID="cri-o://7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8" gracePeriod=10 Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.009078 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.021592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l" (OuterVolumeSpecName: "kube-api-access-mz54l") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "kube-api-access-mz54l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.085483 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.094385 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config" (OuterVolumeSpecName: "config") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102847 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102881 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102893 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz54l\" (UniqueName: \"kubernetes.io/projected/29996e65-8eab-4604-a8ca-cac1063478fd-kube-api-access-mz54l\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.102903 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:14 
crc kubenswrapper[4727]: I0109 11:06:14.160176 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "29996e65-8eab-4604-a8ca-cac1063478fd" (UID: "29996e65-8eab-4604-a8ca-cac1063478fd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:14 crc kubenswrapper[4727]: I0109 11:06:14.205489 4727 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29996e65-8eab-4604-a8ca-cac1063478fd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.039267 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.046189 4727 generic.go:334] "Generic (PLEG): container finished" podID="c987342c-3221-479b-9298-cdf7c85e22cd" containerID="7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8" exitCode=0 Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.046271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerDied","Data":"7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8"} Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.049947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdfc77c64-cjzlr" event={"ID":"29996e65-8eab-4604-a8ca-cac1063478fd","Type":"ContainerDied","Data":"7b19e08e51c2187c9b787539a3d10f06721b0c9cd5e9e0ca48804bb7f658a9cf"} Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.050032 4727 scope.go:117] "RemoveContainer" containerID="69ba3b352cf7b0752fc1cfbf712a979989983617f73c833df815dcbcc7c1d3bd" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.050378 4727 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdfc77c64-cjzlr" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.113076 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.121849 4727 scope.go:117] "RemoveContainer" containerID="be0665d58f970931a3ea0aad99ce23b278af87c1eddb794e7675c2709c3b6717" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.129149 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.155146 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bdfc77c64-cjzlr"] Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.272755 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.273332 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bbb58d5f8-5wxbz" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": dial tcp 10.217.0.163:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.303732 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348408 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348438 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348563 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.348659 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") pod \"c987342c-3221-479b-9298-cdf7c85e22cd\" (UID: \"c987342c-3221-479b-9298-cdf7c85e22cd\") " Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.373669 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5" (OuterVolumeSpecName: "kube-api-access-ttbs5") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "kube-api-access-ttbs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.418204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config" (OuterVolumeSpecName: "config") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.429230 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.429792 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.434044 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.437109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c987342c-3221-479b-9298-cdf7c85e22cd" (UID: "c987342c-3221-479b-9298-cdf7c85e22cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457033 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457067 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457078 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457089 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttbs5\" (UniqueName: \"kubernetes.io/projected/c987342c-3221-479b-9298-cdf7c85e22cd-kube-api-access-ttbs5\") on 
node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457099 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:15 crc kubenswrapper[4727]: I0109 11:06:15.457107 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c987342c-3221-479b-9298-cdf7c85e22cd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.063307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerStarted","Data":"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"} Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.063761 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.065855 4727 generic.go:334] "Generic (PLEG): container finished" podID="bddc5542-122d-4606-a57a-8830398a4c93" containerID="7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265" exitCode=0 Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.065963 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerDied","Data":"7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265"} Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.068344 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.068388 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" event={"ID":"c987342c-3221-479b-9298-cdf7c85e22cd","Type":"ContainerDied","Data":"2fc1bd7230fec540cd4a334d07ebbdb4b06f434463e354143dc267a731f76be2"} Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.068473 4727 scope.go:117] "RemoveContainer" containerID="7209dd2db9d884605bbffaaa7087ae9ed9a08ae87ed60150fd61e912ce5d9fd8" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.070652 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" containerID="cri-o://bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" gracePeriod=30 Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.071063 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" containerID="cri-o://f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" gracePeriod=30 Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.101298 4727 scope.go:117] "RemoveContainer" containerID="976be790afea6d4b89ec035b128ead320d45ad49b962862d4715341f9c9e16da" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.110177 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.509784367 podStartE2EDuration="7.110155764s" podCreationTimestamp="2026-01-09 11:06:09 +0000 UTC" firstStartedPulling="2026-01-09 11:06:10.463653938 +0000 UTC m=+1215.913558719" lastFinishedPulling="2026-01-09 11:06:14.064025335 +0000 UTC m=+1219.513930116" observedRunningTime="2026-01-09 11:06:16.09260066 +0000 UTC m=+1221.542505461" watchObservedRunningTime="2026-01-09 
11:06:16.110155764 +0000 UTC m=+1221.560060545" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.128190 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.130914 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-jd4fj"] Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.295614 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.873734 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" path="/var/lib/kubelet/pods/29996e65-8eab-4604-a8ca-cac1063478fd/volumes" Jan 09 11:06:16 crc kubenswrapper[4727]: I0109 11:06:16.874609 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" path="/var/lib/kubelet/pods/c987342c-3221-479b-9298-cdf7c85e22cd/volumes" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.012115 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.104566 4727 generic.go:334] "Generic (PLEG): container finished" podID="c5f4cf4a-501a-4881-b395-2740657333d5" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" exitCode=0 Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.104639 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerDied","Data":"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c"} Jan 09 
11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.834633 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.953734 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.953901 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.953982 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954055 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954217 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") pod \"c5f4cf4a-501a-4881-b395-2740657333d5\" (UID: \"c5f4cf4a-501a-4881-b395-2740657333d5\") " Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.954936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.960767 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v" (OuterVolumeSpecName: "kube-api-access-gd69v") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "kube-api-access-gd69v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.961771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:18 crc kubenswrapper[4727]: I0109 11:06:18.967498 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts" (OuterVolumeSpecName: "scripts") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.027235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056460 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056529 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5f4cf4a-501a-4881-b395-2740657333d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056543 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056555 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.056568 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd69v\" (UniqueName: \"kubernetes.io/projected/c5f4cf4a-501a-4881-b395-2740657333d5-kube-api-access-gd69v\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.076150 4727 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data" (OuterVolumeSpecName: "config-data") pod "c5f4cf4a-501a-4881-b395-2740657333d5" (UID: "c5f4cf4a-501a-4881-b395-2740657333d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119198 4727 generic.go:334] "Generic (PLEG): container finished" podID="c5f4cf4a-501a-4881-b395-2740657333d5" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" exitCode=0 Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119295 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerDied","Data":"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e"} Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119343 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c5f4cf4a-501a-4881-b395-2740657333d5","Type":"ContainerDied","Data":"ce83cb6536bed5f69863a9bc02f546d105aa9cecf6f79078fbb71dfb9bf0d4f6"} Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119364 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.119383 4727 scope.go:117] "RemoveContainer" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.156973 4727 scope.go:117] "RemoveContainer" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.166206 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5f4cf4a-501a-4881-b395-2740657333d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.209551 4727 scope.go:117] "RemoveContainer" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.210307 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c\": container with ID starting with f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c not found: ID does not exist" containerID="f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.210352 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c"} err="failed to get container status \"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c\": rpc error: code = NotFound desc = could not find container \"f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c\": container with ID starting with f14d77650446f8e67013c98fd7f339541241d9b806aed317a9728c8ed8204c9c not found: ID does not exist" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.210403 4727 scope.go:117] 
"RemoveContainer" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.211317 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e\": container with ID starting with bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e not found: ID does not exist" containerID="bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.211457 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e"} err="failed to get container status \"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e\": rpc error: code = NotFound desc = could not find container \"bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e\": container with ID starting with bb2131a7ed748220e95f22983bdb550d7023061b80cbb191e30a426f9a462d8e not found: ID does not exist" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.212585 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.229952 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.242154 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.242910 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.242943 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" 
containerName="neutron-api" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.242957 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.242968 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.242993 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243002 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243022 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="init" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243031 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="init" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243050 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243057 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243073 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243085 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" Jan 09 11:06:19 crc 
kubenswrapper[4727]: E0109 11:06:19.243099 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243105 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" Jan 09 11:06:19 crc kubenswrapper[4727]: E0109 11:06:19.243124 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243130 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243362 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="cinder-scheduler" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243380 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243388 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-api" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243396 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" containerName="probe" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243405 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.243416 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="29996e65-8eab-4604-a8ca-cac1063478fd" containerName="neutron-httpd" Jan 09 11:06:19 crc 
kubenswrapper[4727]: I0109 11:06:19.243428 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7283b7d5-d972-4c78-ac33-72488eedabf2" containerName="barbican-api-log" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.244833 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.247907 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.252326 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.369804 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfcd\" (UniqueName: \"kubernetes.io/projected/e69c5def-7abe-4486-b548-323e0416cc83-kube-api-access-6qfcd\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370283 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e69c5def-7abe-4486-b548-323e0416cc83-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370408 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.370693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-scripts\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.472931 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e69c5def-7abe-4486-b548-323e0416cc83-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473061 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473089 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-scripts\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfcd\" (UniqueName: \"kubernetes.io/projected/e69c5def-7abe-4486-b548-323e0416cc83-kube-api-access-6qfcd\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.473145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.474666 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e69c5def-7abe-4486-b548-323e0416cc83-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.483367 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " 
pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.483629 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-config-data\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.494457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.497960 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69c5def-7abe-4486-b548-323e0416cc83-scripts\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.498499 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfcd\" (UniqueName: \"kubernetes.io/projected/e69c5def-7abe-4486-b548-323e0416cc83-kube-api-access-6qfcd\") pod \"cinder-scheduler-0\" (UID: \"e69c5def-7abe-4486-b548-323e0416cc83\") " pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.564105 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 11:06:19 crc kubenswrapper[4727]: I0109 11:06:19.997076 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-jd4fj" podUID="c987342c-3221-479b-9298-cdf7c85e22cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: i/o timeout" Jan 09 11:06:20 crc kubenswrapper[4727]: I0109 11:06:20.134342 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 11:06:20 crc kubenswrapper[4727]: W0109 11:06:20.172063 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode69c5def_7abe_4486_b548_323e0416cc83.slice/crio-81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297 WatchSource:0}: Error finding container 81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297: Status 404 returned error can't find the container with id 81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297 Jan 09 11:06:20 crc kubenswrapper[4727]: I0109 11:06:20.875371 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f4cf4a-501a-4881-b395-2740657333d5" path="/var/lib/kubelet/pods/c5f4cf4a-501a-4881-b395-2740657333d5/volumes" Jan 09 11:06:20 crc kubenswrapper[4727]: I0109 11:06:20.922618 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:06:21 crc kubenswrapper[4727]: I0109 11:06:21.043467 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85c4f6b76d-7zrx8" Jan 09 11:06:21 crc kubenswrapper[4727]: I0109 11:06:21.174490 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e69c5def-7abe-4486-b548-323e0416cc83","Type":"ContainerStarted","Data":"273137fd08b7f1df78b4a23bef04f558ece73ad6f5655c66e7f859b7ee230afb"} Jan 09 11:06:21 crc 
kubenswrapper[4727]: I0109 11:06:21.174544 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e69c5def-7abe-4486-b548-323e0416cc83","Type":"ContainerStarted","Data":"81126734d3ad39ac30861149d261064a7809bbe2ad4074717fb1fdd6257da297"} Jan 09 11:06:21 crc kubenswrapper[4727]: I0109 11:06:21.555104 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-666857844b-c2hp6" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.184951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e69c5def-7abe-4486-b548-323e0416cc83","Type":"ContainerStarted","Data":"e35bc1604915387393e7d7f12e6fe1533c0eeb1d5e802af5e018550ce8db9c88"} Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.217254 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.217228555 podStartE2EDuration="3.217228555s" podCreationTimestamp="2026-01-09 11:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:22.20109193 +0000 UTC m=+1227.650996721" watchObservedRunningTime="2026-01-09 11:06:22.217228555 +0000 UTC m=+1227.667133346" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.235643 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.237209 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.239552 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.239564 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wdrq9" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.241795 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.249929 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.339959 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444160 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444223 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.444462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.446854 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.453425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.454573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.464030 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"openstackclient\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.555352 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.625547 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.658411 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.696577 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.698299 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.706745 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.758088 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.758544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz852\" (UniqueName: \"kubernetes.io/projected/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-kube-api-access-kz852\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.759988 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " 
pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.760163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: E0109 11:06:22.770597 4727 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 09 11:06:22 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9144eabf-83b9-49a6-a047-b2606a68d1a7_0(bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180" Netns:"/var/run/netns/cc6d7f27-05af-4c25-a0f4-4bd76583f251" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180;K8S_POD_UID=9144eabf-83b9-49a6-a047-b2606a68d1a7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9144eabf-83b9-49a6-a047-b2606a68d1a7]: expected pod UID "9144eabf-83b9-49a6-a047-b2606a68d1a7" but got "06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" from Kube API Jan 09 11:06:22 crc kubenswrapper[4727]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 09 11:06:22 crc kubenswrapper[4727]: > Jan 09 11:06:22 crc kubenswrapper[4727]: E0109 11:06:22.770683 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 09 11:06:22 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9144eabf-83b9-49a6-a047-b2606a68d1a7_0(bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180" Netns:"/var/run/netns/cc6d7f27-05af-4c25-a0f4-4bd76583f251" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=bf73acdb7c5734d7d364ab8185bfc0a774e97b1691b3c04d945783dc40a6e180;K8S_POD_UID=9144eabf-83b9-49a6-a047-b2606a68d1a7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9144eabf-83b9-49a6-a047-b2606a68d1a7]: expected pod UID "9144eabf-83b9-49a6-a047-b2606a68d1a7" but got "06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" from Kube API Jan 09 11:06:22 crc kubenswrapper[4727]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 09 11:06:22 crc kubenswrapper[4727]: > pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.861576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862043 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz852\" (UniqueName: \"kubernetes.io/projected/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-kube-api-access-kz852\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862653 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.862487 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.867166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.868332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:22 crc kubenswrapper[4727]: I0109 11:06:22.888220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz852\" (UniqueName: \"kubernetes.io/projected/06c8d5e8-c424-4b08-98a2-8e89fa5a27b4-kube-api-access-kz852\") pod \"openstackclient\" (UID: \"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4\") " pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.022877 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.218192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.235408 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.240692 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9144eabf-83b9-49a6-a047-b2606a68d1a7" podUID="06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378238 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378393 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378458 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.378577 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") pod \"9144eabf-83b9-49a6-a047-b2606a68d1a7\" (UID: \"9144eabf-83b9-49a6-a047-b2606a68d1a7\") " Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.380861 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.387876 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.389874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.391000 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5" (OuterVolumeSpecName: "kube-api-access-sb9h5") pod "9144eabf-83b9-49a6-a047-b2606a68d1a7" (UID: "9144eabf-83b9-49a6-a047-b2606a68d1a7"). InnerVolumeSpecName "kube-api-access-sb9h5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481657 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481712 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481734 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb9h5\" (UniqueName: \"kubernetes.io/projected/9144eabf-83b9-49a6-a047-b2606a68d1a7-kube-api-access-sb9h5\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.481755 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9144eabf-83b9-49a6-a047-b2606a68d1a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:23 crc kubenswrapper[4727]: I0109 11:06:23.594872 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.227567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4","Type":"ContainerStarted","Data":"9a59f1ab7d8270687e705ab8bfbfccd195336251e0f5de2cb74edb7519ad8495"} Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.227592 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.250251 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9144eabf-83b9-49a6-a047-b2606a68d1a7" podUID="06c8d5e8-c424-4b08-98a2-8e89fa5a27b4" Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.564549 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 11:06:24 crc kubenswrapper[4727]: I0109 11:06:24.877900 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9144eabf-83b9-49a6-a047-b2606a68d1a7" path="/var/lib/kubelet/pods/9144eabf-83b9-49a6-a047-b2606a68d1a7/volumes" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.868113 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67d6487995-f424z"] Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.873144 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.876699 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.877155 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.877354 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.899478 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67d6487995-f424z"] Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.945879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-log-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.945935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-run-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.945968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-public-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 
11:06:25.945991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-config-data\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.947691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-internal-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.948021 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-combined-ca-bundle\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.948537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-etc-swift\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:25 crc kubenswrapper[4727]: I0109 11:06:25.949452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbhtk\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-kube-api-access-rbhtk\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" 
Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053030 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-etc-swift\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053108 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbhtk\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-kube-api-access-rbhtk\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-log-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053156 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-run-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053174 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-public-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053190 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-config-data\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-internal-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-combined-ca-bundle\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.053706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-log-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.054621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-run-httpd\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.061400 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-internal-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.061491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-config-data\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.061676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-combined-ca-bundle\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.062019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-etc-swift\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.070916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-public-tls-certs\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.072004 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbhtk\" (UniqueName: 
\"kubernetes.io/projected/f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb-kube-api-access-rbhtk\") pod \"swift-proxy-67d6487995-f424z\" (UID: \"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb\") " pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:26 crc kubenswrapper[4727]: I0109 11:06:26.243189 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:27 crc kubenswrapper[4727]: I0109 11:06:27.094103 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67d6487995-f424z"] Jan 09 11:06:27 crc kubenswrapper[4727]: W0109 11:06:27.110395 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d5b74a_ef5f_4cb2_b043_e56bb3cbfcdb.slice/crio-eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef WatchSource:0}: Error finding container eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef: Status 404 returned error can't find the container with id eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef Jan 09 11:06:27 crc kubenswrapper[4727]: I0109 11:06:27.301018 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67d6487995-f424z" event={"ID":"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb","Type":"ContainerStarted","Data":"eb0ff2579e8b8fd7e84d44491ebea57501dd04fc3ad1a6de1a1a5221aa121aef"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.011750 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.162883 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:28 
crc kubenswrapper[4727]: I0109 11:06:28.163132 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log" containerID="cri-o://a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.163599 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd" containerID="cri-o://a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.306723 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.307703 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent" containerID="cri-o://3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.309887 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core" containerID="cri-o://81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.309974 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent" containerID="cri-o://f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.310058 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" containerID="cri-o://e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" gracePeriod=30 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.330165 4727 generic.go:334] "Generic (PLEG): container finished" podID="0333d9ce-e537-4702-9180-533644b70869" containerID="a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a" exitCode=143 Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.330299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerDied","Data":"a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.343455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67d6487995-f424z" event={"ID":"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb","Type":"ContainerStarted","Data":"314d2449db889e5f19208d3bb30746c0b32b087176095f8faf4c4cf733675cba"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.343568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67d6487995-f424z" event={"ID":"f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb","Type":"ContainerStarted","Data":"e12177246f9b22e39ccd4c29aa339a7926b0fe33886539a7d6b07bcb8eb8a1f8"} Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.343695 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.377727 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67d6487995-f424z" podStartSLOduration=3.377698028 podStartE2EDuration="3.377698028s" podCreationTimestamp="2026-01-09 11:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:28.36925202 +0000 UTC m=+1233.819156811" watchObservedRunningTime="2026-01-09 11:06:28.377698028 +0000 UTC m=+1233.827602819" Jan 09 11:06:28 crc kubenswrapper[4727]: I0109 11:06:28.413335 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": read tcp 10.217.0.2:43452->10.217.0.168:3000: read: connection reset by peer" Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.366780 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" exitCode=0 Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.366859 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" exitCode=2 Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.366872 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" exitCode=0 Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.368664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"} Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.368754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"} Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 
11:06:29.368782 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.368918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"} Jan 09 11:06:29 crc kubenswrapper[4727]: I0109 11:06:29.874480 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 11:06:32 crc kubenswrapper[4727]: I0109 11:06:32.401822 4727 generic.go:334] "Generic (PLEG): container finished" podID="0333d9ce-e537-4702-9180-533644b70869" containerID="a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db" exitCode=0 Jan 09 11:06:32 crc kubenswrapper[4727]: I0109 11:06:32.401883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerDied","Data":"a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db"} Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.137586 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.275563 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.283849 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.283970 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284091 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284123 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284210 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.284237 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") pod \"38361e01-9ca6-4c45-8b88-809107b70a25\" (UID: \"38361e01-9ca6-4c45-8b88-809107b70a25\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.285277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.285906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.290587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts" (OuterVolumeSpecName: "scripts") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.290596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m" (OuterVolumeSpecName: "kube-api-access-g9s2m") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "kube-api-access-g9s2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.339049 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: 
\"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385807 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385890 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.385954 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"0333d9ce-e537-4702-9180-533644b70869\" (UID: \"0333d9ce-e537-4702-9180-533644b70869\") " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386465 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-log-httpd\") on node \"crc\" 
DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386485 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386495 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386522 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38361e01-9ca6-4c45-8b88-809107b70a25-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386531 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9s2m\" (UniqueName: \"kubernetes.io/projected/38361e01-9ca6-4c45-8b88-809107b70a25-kube-api-access-g9s2m\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.386997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.387696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs" (OuterVolumeSpecName: "logs") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.398338 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts" (OuterVolumeSpecName: "scripts") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.402615 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.416041 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj" (OuterVolumeSpecName: "kube-api-access-5x8gj") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "kube-api-access-5x8gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.422819 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.429765 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.443835 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data" (OuterVolumeSpecName: "config-data") pod "38361e01-9ca6-4c45-8b88-809107b70a25" (UID: "38361e01-9ca6-4c45-8b88-809107b70a25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461070 4727 generic.go:334] "Generic (PLEG): container finished" podID="38361e01-9ca6-4c45-8b88-809107b70a25" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" exitCode=0 Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461170 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"} Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461250 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"38361e01-9ca6-4c45-8b88-809107b70a25","Type":"ContainerDied","Data":"c35551f5fd2325dd8ded3e2242e43e59a4eeb9e347df7aa845f106c0ffc6e15c"} Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.461272 4727 scope.go:117] "RemoveContainer" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.464635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0333d9ce-e537-4702-9180-533644b70869","Type":"ContainerDied","Data":"12521441785a6be4a96436563319f80587f9a2418f37def93d11a3deb7fe4967"} Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.464845 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.467830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"06c8d5e8-c424-4b08-98a2-8e89fa5a27b4","Type":"ContainerStarted","Data":"f072bbd068468554fe717389c742978c432d67269a53aad5b050c57ccce64416"} Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.468531 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.479660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data" (OuterVolumeSpecName: "config-data") pod "0333d9ce-e537-4702-9180-533644b70869" (UID: "0333d9ce-e537-4702-9180-533644b70869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488913 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488959 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488973 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488985 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.488996 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x8gj\" (UniqueName: \"kubernetes.io/projected/0333d9ce-e537-4702-9180-533644b70869-kube-api-access-5x8gj\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489009 4727 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489022 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0333d9ce-e537-4702-9180-533644b70869-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489033 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0333d9ce-e537-4702-9180-533644b70869-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489044 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38361e01-9ca6-4c45-8b88-809107b70a25-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.489118 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.496595 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.276491558 podStartE2EDuration="12.496562237s" podCreationTimestamp="2026-01-09 11:06:22 +0000 UTC" firstStartedPulling="2026-01-09 11:06:23.602281514 +0000 UTC m=+1229.052186305" lastFinishedPulling="2026-01-09 11:06:33.822352203 +0000 UTC m=+1239.272256984" observedRunningTime="2026-01-09 11:06:34.496476255 +0000 UTC m=+1239.946381056" watchObservedRunningTime="2026-01-09 11:06:34.496562237 +0000 UTC m=+1239.946467028" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.499538 4727 scope.go:117] "RemoveContainer" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" Jan 09 11:06:34 crc kubenswrapper[4727]: 
I0109 11:06:34.527354 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.532470 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.545871 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.554715 4727 scope.go:117] "RemoveContainer" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569151 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569664 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569686 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569707 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569714 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569729 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569736 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-log" Jan 09 11:06:34 crc 
kubenswrapper[4727]: E0109 11:06:34.569750 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569756 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569767 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569772 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.569782 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569788 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569969 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="proxy-httpd" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569985 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-notification-agent" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.569997 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0333d9ce-e537-4702-9180-533644b70869" containerName="glance-httpd" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.570007 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0333d9ce-e537-4702-9180-533644b70869" 
containerName="glance-log" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.570017 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="sg-core" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.570030 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" containerName="ceilometer-central-agent" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.591947 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.593241 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.593416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.596048 4727 scope.go:117] "RemoveContainer" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.597287 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.597674 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.637174 4727 scope.go:117] "RemoveContainer" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.638004 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba\": container with ID starting with 
e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba not found: ID does not exist" containerID="e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638079 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba"} err="failed to get container status \"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba\": rpc error: code = NotFound desc = could not find container \"e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba\": container with ID starting with e1311c26889685cb89bf23aa49406adb3934171927ec0dd19737d75d889286ba not found: ID does not exist" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638124 4727 scope.go:117] "RemoveContainer" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.638738 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2\": container with ID starting with 81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2 not found: ID does not exist" containerID="81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638781 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2"} err="failed to get container status \"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2\": rpc error: code = NotFound desc = could not find container \"81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2\": container with ID starting with 81bf1d69ca31605a7446f72f2ea52ff63b3174c22157e03e20fa5bb4821133c2 not found: ID does not 
exist" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.638818 4727 scope.go:117] "RemoveContainer" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.639388 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a\": container with ID starting with f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a not found: ID does not exist" containerID="f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.639455 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a"} err="failed to get container status \"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a\": rpc error: code = NotFound desc = could not find container \"f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a\": container with ID starting with f9a5d6c56b42616a6b19b022facf535e1df797ad079af603d4371917df98ba0a not found: ID does not exist" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.639496 4727 scope.go:117] "RemoveContainer" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" Jan 09 11:06:34 crc kubenswrapper[4727]: E0109 11:06:34.640008 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226\": container with ID starting with 3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226 not found: ID does not exist" containerID="3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.640043 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226"} err="failed to get container status \"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226\": rpc error: code = NotFound desc = could not find container \"3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226\": container with ID starting with 3666ff567a68848a1bcab5f9141d38c692fc104df51bda748df0e58408101226 not found: ID does not exist" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.640060 4727 scope.go:117] "RemoveContainer" containerID="a4b26311570970894698f0299d46c683f09cd959427c872f4c8ade0254f4a9db" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.677010 4727 scope.go:117] "RemoveContainer" containerID="a4559962894fdb57a28c0a6d96797f73b47554af7d936ad0a86d41891fe4c54a" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694130 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694239 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: 
I0109 11:06:34.694336 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.694461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796134 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.796442 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: 
I0109 11:06:34.797328 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.797604 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.803799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.804102 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.805459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.806501 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.812489 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.823629 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.825397 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"ceilometer-0\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " pod="openstack/ceilometer-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.839147 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.844603 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.850039 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.850284 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.892205 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0333d9ce-e537-4702-9180-533644b70869" path="/var/lib/kubelet/pods/0333d9ce-e537-4702-9180-533644b70869/volumes" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.896292 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38361e01-9ca6-4c45-8b88-809107b70a25" path="/var/lib/kubelet/pods/38361e01-9ca6-4c45-8b88-809107b70a25/volumes" Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.897332 4727 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:34 crc kubenswrapper[4727]: I0109 11:06:34.922717 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.000469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001044 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001080 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001192 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.001284 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbhr\" (UniqueName: \"kubernetes.io/projected/992ca8ba-ec96-4dc0-9442-464cbdce8afc-kube-api-access-klbhr\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103246 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbhr\" (UniqueName: \"kubernetes.io/projected/992ca8ba-ec96-4dc0-9442-464cbdce8afc-kube-api-access-klbhr\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103529 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103566 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.103616 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.104289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.104593 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.105737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992ca8ba-ec96-4dc0-9442-464cbdce8afc-logs\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.127095 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.127364 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.136874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbhr\" (UniqueName: \"kubernetes.io/projected/992ca8ba-ec96-4dc0-9442-464cbdce8afc-kube-api-access-klbhr\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.138913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.153423 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ca8ba-ec96-4dc0-9442-464cbdce8afc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.179296 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"992ca8ba-ec96-4dc0-9442-464cbdce8afc\") " 
pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.210579 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.421069 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.484122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"53954f674440644c283f7c014b698cd7d333a0f6dbc5a2f2cda29cf21add04ea"} Jan 09 11:06:35 crc kubenswrapper[4727]: I0109 11:06:35.911240 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 11:06:35 crc kubenswrapper[4727]: W0109 11:06:35.915838 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod992ca8ba_ec96_4dc0_9442_464cbdce8afc.slice/crio-840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b WatchSource:0}: Error finding container 840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b: Status 404 returned error can't find the container with id 840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.252982 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.254857 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67d6487995-f424z" Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.536961 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"992ca8ba-ec96-4dc0-9442-464cbdce8afc","Type":"ContainerStarted","Data":"840cc80e8fbea759ddd676341e9ae211c4513c188b09fc605c03ca8cb678379b"} Jan 09 11:06:36 crc kubenswrapper[4727]: I0109 11:06:36.543083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.557766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"992ca8ba-ec96-4dc0-9442-464cbdce8afc","Type":"ContainerStarted","Data":"f88012d7a0f6c75360813ec72689391ad9f83cabb290573b047ee7a12474ac10"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.558681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"992ca8ba-ec96-4dc0-9442-464cbdce8afc","Type":"ContainerStarted","Data":"c690d8102c45dd69803e5c699761883e5c2842c846ee2fc4de64aa59112668c2"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.563309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.574211 4727 generic.go:334] "Generic (PLEG): container finished" podID="3d0f92bc-9d54-4382-b822-064c339799c4" containerID="4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c" exitCode=137 Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.574289 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerDied","Data":"4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c"} Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.601373 
4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.601348423 podStartE2EDuration="3.601348423s" podCreationTimestamp="2026-01-09 11:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:37.593742797 +0000 UTC m=+1243.043647588" watchObservedRunningTime="2026-01-09 11:06:37.601348423 +0000 UTC m=+1243.051253204" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.608048 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656739 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656804 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") pod 
\"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656856 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.656950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.657022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxn2\" (UniqueName: \"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") pod \"3d0f92bc-9d54-4382-b822-064c339799c4\" (UID: \"3d0f92bc-9d54-4382-b822-064c339799c4\") " Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.659381 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.664116 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs" (OuterVolumeSpecName: "logs") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.668704 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.668835 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2" (OuterVolumeSpecName: "kube-api-access-8kxn2") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "kube-api-access-8kxn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.671978 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts" (OuterVolumeSpecName: "scripts") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.693589 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.742917 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data" (OuterVolumeSpecName: "config-data") pod "3d0f92bc-9d54-4382-b822-064c339799c4" (UID: "3d0f92bc-9d54-4382-b822-064c339799c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762191 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762242 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762260 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0f92bc-9d54-4382-b822-064c339799c4-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762268 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762277 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d0f92bc-9d54-4382-b822-064c339799c4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762286 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxn2\" (UniqueName: 
\"kubernetes.io/projected/3d0f92bc-9d54-4382-b822-064c339799c4-kube-api-access-8kxn2\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:37 crc kubenswrapper[4727]: I0109 11:06:37.762301 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0f92bc-9d54-4382-b822-064c339799c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.011665 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cbf5cf75b-vwxrh" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.011857 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.247867 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.586956 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b"} Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.589866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d0f92bc-9d54-4382-b822-064c339799c4","Type":"ContainerDied","Data":"46e0819a2a4dd76f55beafd0dd463399c99fccea0ca8d438850be56e9391306d"} Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.589926 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.589942 4727 scope.go:117] "RemoveContainer" containerID="4ee6764b5fdc3c956db5077b68b066ba3b6cffb72aea4ec0383061698e22916c" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.628080 4727 scope.go:117] "RemoveContainer" containerID="89d95b2eb64fc4fc7cbb45d90c295c946e87a4f7e926ae47cdac1ed9399064e0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.641685 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.670116 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.685640 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: E0109 11:06:38.686231 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686254 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" Jan 09 11:06:38 crc kubenswrapper[4727]: E0109 11:06:38.686289 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686297 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686535 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.686568 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d0f92bc-9d54-4382-b822-064c339799c4" containerName="cinder-api-log" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.687945 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.696137 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.696147 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.698990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.699698 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789004 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789103 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-scripts\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a36e4825-82aa-4263-a757-807b3c43d2fa-logs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 
11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789231 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7trj\" (UniqueName: \"kubernetes.io/projected/a36e4825-82aa-4263-a757-807b3c43d2fa-kube-api-access-d7trj\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.789389 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a36e4825-82aa-4263-a757-807b3c43d2fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.873379 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0f92bc-9d54-4382-b822-064c339799c4" path="/var/lib/kubelet/pods/3d0f92bc-9d54-4382-b822-064c339799c4/volumes" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891003 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-scripts\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a36e4825-82aa-4263-a757-807b3c43d2fa-logs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891190 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891228 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891267 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7trj\" (UniqueName: \"kubernetes.io/projected/a36e4825-82aa-4263-a757-807b3c43d2fa-kube-api-access-d7trj\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a36e4825-82aa-4263-a757-807b3c43d2fa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.891557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a36e4825-82aa-4263-a757-807b3c43d2fa-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.893301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a36e4825-82aa-4263-a757-807b3c43d2fa-logs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.900672 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.901199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.901220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-config-data\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.903956 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-scripts\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.911039 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.918380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36e4825-82aa-4263-a757-807b3c43d2fa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:38 crc kubenswrapper[4727]: I0109 11:06:38.919051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7trj\" (UniqueName: \"kubernetes.io/projected/a36e4825-82aa-4263-a757-807b3c43d2fa-kube-api-access-d7trj\") pod \"cinder-api-0\" (UID: \"a36e4825-82aa-4263-a757-807b3c43d2fa\") " pod="openstack/cinder-api-0" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.022528 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.393683 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.617300 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a36e4825-82aa-4263-a757-807b3c43d2fa","Type":"ContainerStarted","Data":"c4ffd9a909de77c8047be5556f7d30d4cd94e6768fd71711bd3e102953c341b5"} Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerStarted","Data":"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b"} Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624423 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" containerID="cri-o://3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624569 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-notification-agent" containerID="cri-o://c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624571 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" containerID="cri-o://88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624614 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" containerID="cri-o://273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.624446 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.651765 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9270059430000002 podStartE2EDuration="5.651745246s" podCreationTimestamp="2026-01-09 11:06:34 +0000 UTC" firstStartedPulling="2026-01-09 11:06:35.42362404 +0000 UTC m=+1240.873528811" lastFinishedPulling="2026-01-09 11:06:39.148363333 +0000 UTC m=+1244.598268114" observedRunningTime="2026-01-09 11:06:39.649789414 +0000 UTC m=+1245.099694205" watchObservedRunningTime="2026-01-09 11:06:39.651745246 +0000 UTC m=+1245.101650027" Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.826946 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.827438 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" containerID="cri-o://4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.827694 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" containerID="cri-o://cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87" gracePeriod=30 Jan 09 11:06:39 crc kubenswrapper[4727]: I0109 11:06:39.836426 4727 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/glance-default-external-api-0" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": EOF" Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.647410 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a36e4825-82aa-4263-a757-807b3c43d2fa","Type":"ContainerStarted","Data":"812844ab9b6123b33c634cc42ae56d8c301dbf8002185d55731ae8f14f2e8c13"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.651147 4727 generic.go:334] "Generic (PLEG): container finished" podID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerID="4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223" exitCode=143 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.651203 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerDied","Data":"4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656040 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" exitCode=0 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656072 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" exitCode=2 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656087 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" exitCode=0 Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656146 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b"} Jan 09 11:06:40 crc kubenswrapper[4727]: I0109 11:06:40.656241 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28"} Jan 09 11:06:41 crc kubenswrapper[4727]: I0109 11:06:41.680987 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a36e4825-82aa-4263-a757-807b3c43d2fa","Type":"ContainerStarted","Data":"2a618da410bc97f5f757c7fc9458fd23df5d20e4be11129aefa46d8cf0c996fb"} Jan 09 11:06:41 crc kubenswrapper[4727]: I0109 11:06:41.681545 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 11:06:41 crc kubenswrapper[4727]: I0109 11:06:41.726856 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.726826537 podStartE2EDuration="3.726826537s" podCreationTimestamp="2026-01-09 11:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:41.707069753 +0000 UTC m=+1247.156974554" watchObservedRunningTime="2026-01-09 11:06:41.726826537 +0000 UTC m=+1247.176731318" Jan 09 11:06:42 crc kubenswrapper[4727]: I0109 11:06:42.693578 4727 generic.go:334] "Generic (PLEG): container finished" podID="bddc5542-122d-4606-a57a-8830398a4c93" containerID="d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3" 
exitCode=137 Jan 09 11:06:42 crc kubenswrapper[4727]: I0109 11:06:42.693679 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerDied","Data":"d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3"} Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.451902 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.532880 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.532972 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533066 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533104 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533172 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.533416 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") pod \"bddc5542-122d-4606-a57a-8830398a4c93\" (UID: \"bddc5542-122d-4606-a57a-8830398a4c93\") " Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.535660 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs" (OuterVolumeSpecName: "logs") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.552210 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.573735 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw" (OuterVolumeSpecName: "kube-api-access-xf4tw") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "kube-api-access-xf4tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.578187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.593684 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data" (OuterVolumeSpecName: "config-data") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.596277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts" (OuterVolumeSpecName: "scripts") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.611499 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "bddc5542-122d-4606-a57a-8830398a4c93" (UID: "bddc5542-122d-4606-a57a-8830398a4c93"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637200 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637262 4727 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637279 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4tw\" (UniqueName: \"kubernetes.io/projected/bddc5542-122d-4606-a57a-8830398a4c93-kube-api-access-xf4tw\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637298 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637311 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc5542-122d-4606-a57a-8830398a4c93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637323 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/bddc5542-122d-4606-a57a-8830398a4c93-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.637334 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bddc5542-122d-4606-a57a-8830398a4c93-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.705913 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cbf5cf75b-vwxrh" event={"ID":"bddc5542-122d-4606-a57a-8830398a4c93","Type":"ContainerDied","Data":"f359bb60ecb5049a25ef11d10b22c031018c3de4d2dffb82f605df54479897f8"} Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.705970 4727 scope.go:117] "RemoveContainer" containerID="7ea2369776acb5605db5d13449b45cc3818eb7bf8bfb5e10499576aa7ff87265" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.706108 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cbf5cf75b-vwxrh" Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.752859 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.779640 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cbf5cf75b-vwxrh"] Jan 09 11:06:43 crc kubenswrapper[4727]: I0109 11:06:43.916238 4727 scope.go:117] "RemoveContainer" containerID="d807b486032d47770629b7fd06969df1b9f14fb740b07ec398942cb7de97e9f3" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.306126 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461553 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461669 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461776 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.461994 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.462033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.462058 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") pod \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\" (UID: \"63e9021a-5a0b-4f42-985a-1d3f60e1356f\") " Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.462523 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.463616 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.464395 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.464420 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63e9021a-5a0b-4f42-985a-1d3f60e1356f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.470475 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts" (OuterVolumeSpecName: "scripts") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.471941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v" (OuterVolumeSpecName: "kube-api-access-tqj2v") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "kube-api-access-tqj2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.499235 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.560148 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566188 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566227 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566238 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqj2v\" (UniqueName: \"kubernetes.io/projected/63e9021a-5a0b-4f42-985a-1d3f60e1356f-kube-api-access-tqj2v\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.566249 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.621331 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data" (OuterVolumeSpecName: "config-data") pod "63e9021a-5a0b-4f42-985a-1d3f60e1356f" (UID: "63e9021a-5a0b-4f42-985a-1d3f60e1356f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.667877 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63e9021a-5a0b-4f42-985a-1d3f60e1356f-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.732950 4727 generic.go:334] "Generic (PLEG): container finished" podID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" exitCode=0 Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54"} Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733092 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63e9021a-5a0b-4f42-985a-1d3f60e1356f","Type":"ContainerDied","Data":"53954f674440644c283f7c014b698cd7d333a0f6dbc5a2f2cda29cf21add04ea"} Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.733635 4727 scope.go:117] "RemoveContainer" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.824088 4727 scope.go:117] "RemoveContainer" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.829838 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.902199 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bddc5542-122d-4606-a57a-8830398a4c93" path="/var/lib/kubelet/pods/bddc5542-122d-4606-a57a-8830398a4c93/volumes" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.902878 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.902918 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903212 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903244 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903259 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903265 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903283 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903302 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-notification-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903307 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" 
containerName="ceilometer-notification-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903333 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903339 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.903350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.903356 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906690 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906713 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-central-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906728 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="ceilometer-notification-agent" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906741 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="sg-core" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906750 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc5542-122d-4606-a57a-8830398a4c93" containerName="horizon-log" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.906765 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" containerName="proxy-httpd" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.908584 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.908701 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.912741 4727 scope.go:117] "RemoveContainer" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.914467 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.914848 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.960703 4727 scope.go:117] "RemoveContainer" containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.978011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.978070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.978110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979075 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979128 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.979681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.991269 4727 scope.go:117] "RemoveContainer" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.991991 4727 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b\": container with ID starting with 273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b not found: ID does not exist" containerID="273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992057 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b"} err="failed to get container status \"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b\": rpc error: code = NotFound desc = could not find container \"273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b\": container with ID starting with 273ea79f903a95ea808ccc3ab05efbbe2d7aac0b042e5621f7cb84a91537ba3b not found: ID does not exist" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992080 4727 scope.go:117] "RemoveContainer" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.992712 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b\": container with ID starting with 88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b not found: ID does not exist" containerID="88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992773 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b"} err="failed to get container status \"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b\": rpc error: code = NotFound desc = could not find container 
\"88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b\": container with ID starting with 88f2f8f59ea3c1d3d39ac5740be27ee8f4685f99896cf78031368595d57a094b not found: ID does not exist" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.992818 4727 scope.go:117] "RemoveContainer" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.993191 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28\": container with ID starting with c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28 not found: ID does not exist" containerID="c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.993225 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28"} err="failed to get container status \"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28\": rpc error: code = NotFound desc = could not find container \"c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28\": container with ID starting with c4c68852152656db7bdded4469cf84b82ecb7f9783fde067d924f4998db2ad28 not found: ID does not exist" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.993245 4727 scope.go:117] "RemoveContainer" containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" Jan 09 11:06:44 crc kubenswrapper[4727]: E0109 11:06:44.994831 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54\": container with ID starting with 3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54 not found: ID does not exist" 
containerID="3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54" Jan 09 11:06:44 crc kubenswrapper[4727]: I0109 11:06:44.994910 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54"} err="failed to get container status \"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54\": rpc error: code = NotFound desc = could not find container \"3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54\": container with ID starting with 3f2b2de153d6b0d37acb7150d94b097e7bbddbc8ff87e29b0103b2e9fd8f3a54 not found: ID does not exist" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"ceilometer-0\" (UID: 
\"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081611 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081650 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.081724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.082567 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.082967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.087401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.087999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.089846 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.100313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.104204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"ceilometer-0\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.211702 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.212163 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc 
kubenswrapper[4727]: I0109 11:06:45.238645 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.261418 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.273711 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.762088 4727 generic.go:334] "Generic (PLEG): container finished" podID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerID="cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87" exitCode=0 Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.762241 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerDied","Data":"cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87"} Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.769200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.769242 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.802958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:45 crc kubenswrapper[4727]: I0109 11:06:45.907401 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.004881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.005032 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.005585 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.005673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006414 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006561 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006627 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") pod \"5848a983-5b79-4b20-83bf-aa831b16a3de\" (UID: \"5848a983-5b79-4b20-83bf-aa831b16a3de\") " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.006708 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs" (OuterVolumeSpecName: "logs") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.007052 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.007068 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5848a983-5b79-4b20-83bf-aa831b16a3de-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.022528 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs" (OuterVolumeSpecName: "kube-api-access-69jrs") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "kube-api-access-69jrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.025459 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.034661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts" (OuterVolumeSpecName: "scripts") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.109948 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69jrs\" (UniqueName: \"kubernetes.io/projected/5848a983-5b79-4b20-83bf-aa831b16a3de-kube-api-access-69jrs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.110059 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.110076 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.142321 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.157485 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data" (OuterVolumeSpecName: "config-data") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.186678 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5848a983-5b79-4b20-83bf-aa831b16a3de" (UID: "5848a983-5b79-4b20-83bf-aa831b16a3de"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.217268 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.217322 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.217335 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848a983-5b79-4b20-83bf-aa831b16a3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.219792 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 
11:06:46.320281 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.778820 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6"} Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.778900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"f2bd9db006208a075f1ffda298772516cf088a891a012e3732a1779dc1575402"} Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.782006 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.782064 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5848a983-5b79-4b20-83bf-aa831b16a3de","Type":"ContainerDied","Data":"64cc505548582ff0b92efe52617ea9736e870feb1d2d85557f334e68ae42a742"} Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.782627 4727 scope.go:117] "RemoveContainer" containerID="cf72e6f6cb36666185b31ee4b4117ed00aca723f02272ca6e05ab4d6457d2f87" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.827627 4727 scope.go:117] "RemoveContainer" containerID="4fcb09a552a1ed5f35a7bc9d498f3040afa15136fb622e4edcf2d346e8edf223" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.847157 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.905850 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e9021a-5a0b-4f42-985a-1d3f60e1356f" 
path="/var/lib/kubelet/pods/63e9021a-5a0b-4f42-985a-1d3f60e1356f/volumes" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907138 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907182 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: E0109 11:06:46.907549 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907572 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" Jan 09 11:06:46 crc kubenswrapper[4727]: E0109 11:06:46.907613 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907622 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907889 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-log" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.907928 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" containerName="glance-httpd" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.909473 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.909598 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.914804 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 11:06:46 crc kubenswrapper[4727]: I0109 11:06:46.914822 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037030 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037144 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mvj\" (UniqueName: \"kubernetes.io/projected/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-kube-api-access-r4mvj\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037276 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-logs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037332 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.037367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140786 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140882 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140919 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.140983 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mvj\" (UniqueName: \"kubernetes.io/projected/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-kube-api-access-r4mvj\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " 
pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141184 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-logs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.141218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.143278 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.150882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.151935 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-logs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.164681 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.173386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mvj\" (UniqueName: \"kubernetes.io/projected/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-kube-api-access-r4mvj\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.173775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.179250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.181574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.194720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a\") " pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.241149 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.804579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c"} Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.807591 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.807621 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:06:47 crc kubenswrapper[4727]: I0109 11:06:47.937973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.386262 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.824094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a","Type":"ContainerStarted","Data":"b61826b9b4a0d9c6bb8ec12fb34ee32091915b700e3896f7c3e954de3db94207"} Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.832777 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1"} Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.895002 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5848a983-5b79-4b20-83bf-aa831b16a3de" path="/var/lib/kubelet/pods/5848a983-5b79-4b20-83bf-aa831b16a3de/volumes" Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.943377 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.944023 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:06:48 crc kubenswrapper[4727]: I0109 11:06:48.952474 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 11:06:49 crc kubenswrapper[4727]: I0109 11:06:49.849060 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a","Type":"ContainerStarted","Data":"396fbbaa7ae4a192d4bc57f3f2262d2f919b4aa24f7ce2707acdd79f7d97bcdc"} Jan 09 11:06:49 crc kubenswrapper[4727]: I0109 11:06:49.849523 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a","Type":"ContainerStarted","Data":"d6a30576bbb70208bfe01709850084dc396ed8bd963a51c56eaa24fa9b7e44d5"} Jan 09 11:06:49 crc kubenswrapper[4727]: I0109 11:06:49.881487 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.881461755 podStartE2EDuration="3.881461755s" podCreationTimestamp="2026-01-09 11:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:49.870021797 +0000 UTC m=+1255.319926598" watchObservedRunningTime="2026-01-09 11:06:49.881461755 +0000 UTC m=+1255.331366536" Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863560 4727 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-central-agent" containerID="cri-o://9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863560 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" containerID="cri-o://e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863597 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" containerID="cri-o://f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.863611 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" containerID="cri-o://199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1" gracePeriod=30 Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.882423 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.882466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerStarted","Data":"f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49"} Jan 09 11:06:50 crc kubenswrapper[4727]: I0109 11:06:50.894413 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.011140283 podStartE2EDuration="6.894381376s" 
podCreationTimestamp="2026-01-09 11:06:44 +0000 UTC" firstStartedPulling="2026-01-09 11:06:45.81779296 +0000 UTC m=+1251.267697741" lastFinishedPulling="2026-01-09 11:06:49.701034053 +0000 UTC m=+1255.150938834" observedRunningTime="2026-01-09 11:06:50.887015767 +0000 UTC m=+1256.336920568" watchObservedRunningTime="2026-01-09 11:06:50.894381376 +0000 UTC m=+1256.344286167" Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.383168 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.878660 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49" exitCode=0 Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879047 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1" exitCode=2 Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879060 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c" exitCode=0 Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.878741 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49"} Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1"} Jan 09 11:06:51 crc kubenswrapper[4727]: I0109 11:06:51.879210 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c"} Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.066800 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.068093 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.111479 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.171975 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.173445 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.179720 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.179892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.197439 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.279299 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.280858 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281784 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.281814 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " 
pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.282876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.288905 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.298444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.329254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"nova-api-db-create-ljc8f\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.383263 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.386895 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.390299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.394707 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod 
\"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.418538 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.426112 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.435373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"nova-cell0-db-create-q4g4f\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.492347 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494102 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494207 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494375 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494421 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.494451 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.496836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.497357 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.500273 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.515116 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.532891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"nova-api-911e-account-create-update-hznc7\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.596456 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.596938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " 
pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.597039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.597206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.598038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.617843 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"nova-cell1-db-create-qftd4\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.660246 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.698399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.705090 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.707623 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.708440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.709727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.709969 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.708888 4727 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.713435 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.741798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"nova-cell0-0b0c-account-create-update-txznh\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.815601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.815682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.919577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " 
pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.919972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.925053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.967830 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"nova-cell1-bf38-account-create-update-j6vxl\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:52 crc kubenswrapper[4727]: I0109 11:06:52.987743 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.043017 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.056558 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:06:53 crc kubenswrapper[4727]: W0109 11:06:53.072596 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d27352_2f68_4ced_a541_7bbd8bf33fb1.slice/crio-d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2 WatchSource:0}: Error finding container d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2: Status 404 returned error can't find the container with id d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.162688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.317682 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.510197 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:06:53 crc kubenswrapper[4727]: W0109 11:06:53.696711 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod784df696_fe59_4d64_841e_53fa77ded98f.slice/crio-836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a WatchSource:0}: Error finding container 836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a: Status 404 returned error can't find the container with id 836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.707942 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.724523 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.909781 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerID="c3ed6956b8e31f8503a62e89b83a4ac7a7d349bbdaa2c48c86045a4720314a5c" exitCode=0 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.909912 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-911e-account-create-update-hznc7" event={"ID":"bf2c02d0-08f3-4174-a1a1-44b6b99df774","Type":"ContainerDied","Data":"c3ed6956b8e31f8503a62e89b83a4ac7a7d349bbdaa2c48c86045a4720314a5c"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.910263 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-911e-account-create-update-hznc7" event={"ID":"bf2c02d0-08f3-4174-a1a1-44b6b99df774","Type":"ContainerStarted","Data":"f1066e9f9870d5bb306bc2e02c89bc514fc6e053f3c1e28af25514a077f171c8"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.912818 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerStarted","Data":"e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.912880 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerStarted","Data":"aaf3c210c209a1662b5b7d70902f65a5d7e7c38eccb07edd70b1c7ba5ef156fe"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.922057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" 
event={"ID":"a403535a-35d2-487c-9fab-20360257ec11","Type":"ContainerStarted","Data":"a857de2bbeebd7efd2a26ea815022fd61196dc20801f7abb2a844f81c6fc6c43"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.925186 4727 generic.go:334] "Generic (PLEG): container finished" podID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerID="339bcb56de0d0083e60bb9f99ee6710c9861edb4bb896039162501a9d46ed6ed" exitCode=0 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.925382 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ljc8f" event={"ID":"37d27352-2f68-4ced-a541-7bbd8bf33fb1","Type":"ContainerDied","Data":"339bcb56de0d0083e60bb9f99ee6710c9861edb4bb896039162501a9d46ed6ed"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.925482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ljc8f" event={"ID":"37d27352-2f68-4ced-a541-7bbd8bf33fb1","Type":"ContainerStarted","Data":"d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.932270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" event={"ID":"784df696-fe59-4d64-841e-53fa77ded98f","Type":"ContainerStarted","Data":"836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.938398 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerID="f947874cac612f305507a7bdaf8471df8d3875799b74261e1f17af4a0dc3c24e" exitCode=0 Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.938462 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q4g4f" event={"ID":"b7c40808-e98b-4a31-b057-5c5b38ed5774","Type":"ContainerDied","Data":"f947874cac612f305507a7bdaf8471df8d3875799b74261e1f17af4a0dc3c24e"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.938501 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q4g4f" event={"ID":"b7c40808-e98b-4a31-b057-5c5b38ed5774","Type":"ContainerStarted","Data":"9180015d32957f45be579c8855fd0fd063dd1ef6a963785bfaa5168f3af4dae4"} Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.950665 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qftd4" podStartSLOduration=1.9506386390000001 podStartE2EDuration="1.950638639s" podCreationTimestamp="2026-01-09 11:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:53.943934429 +0000 UTC m=+1259.393839210" watchObservedRunningTime="2026-01-09 11:06:53.950638639 +0000 UTC m=+1259.400543420" Jan 09 11:06:53 crc kubenswrapper[4727]: I0109 11:06:53.999683 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" podStartSLOduration=1.999652703 podStartE2EDuration="1.999652703s" podCreationTimestamp="2026-01-09 11:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:06:53.986140559 +0000 UTC m=+1259.436045340" watchObservedRunningTime="2026-01-09 11:06:53.999652703 +0000 UTC m=+1259.449557474" Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.952457 4727 generic.go:334] "Generic (PLEG): container finished" podID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerID="e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff" exitCode=0 Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.952861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerDied","Data":"e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff"} Jan 09 11:06:54 crc 
kubenswrapper[4727]: I0109 11:06:54.959081 4727 generic.go:334] "Generic (PLEG): container finished" podID="a403535a-35d2-487c-9fab-20360257ec11" containerID="ddf7504037a0d74d61286b57ca98d5ca4686f34d2f909e9a72a2f12480874e58" exitCode=0 Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.959129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" event={"ID":"a403535a-35d2-487c-9fab-20360257ec11","Type":"ContainerDied","Data":"ddf7504037a0d74d61286b57ca98d5ca4686f34d2f909e9a72a2f12480874e58"} Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.971073 4727 generic.go:334] "Generic (PLEG): container finished" podID="784df696-fe59-4d64-841e-53fa77ded98f" containerID="478ae5028a10c820659c5824f58f2f2a67e0f6b5335c5e28c9b5c14e796d35bd" exitCode=0 Jan 09 11:06:54 crc kubenswrapper[4727]: I0109 11:06:54.971293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" event={"ID":"784df696-fe59-4d64-841e-53fa77ded98f","Type":"ContainerDied","Data":"478ae5028a10c820659c5824f58f2f2a67e0f6b5335c5e28c9b5c14e796d35bd"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.400844 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.523891 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") pod \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.523950 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") pod \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\" (UID: \"37d27352-2f68-4ced-a541-7bbd8bf33fb1\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.525178 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37d27352-2f68-4ced-a541-7bbd8bf33fb1" (UID: "37d27352-2f68-4ced-a541-7bbd8bf33fb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.531307 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv" (OuterVolumeSpecName: "kube-api-access-49twv") pod "37d27352-2f68-4ced-a541-7bbd8bf33fb1" (UID: "37d27352-2f68-4ced-a541-7bbd8bf33fb1"). InnerVolumeSpecName "kube-api-access-49twv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.605538 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.615742 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.630313 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49twv\" (UniqueName: \"kubernetes.io/projected/37d27352-2f68-4ced-a541-7bbd8bf33fb1-kube-api-access-49twv\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.630356 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37d27352-2f68-4ced-a541-7bbd8bf33fb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731446 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") pod \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731623 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") pod \"b7c40808-e98b-4a31-b057-5c5b38ed5774\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731679 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") pod \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\" (UID: \"bf2c02d0-08f3-4174-a1a1-44b6b99df774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.731802 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") pod \"b7c40808-e98b-4a31-b057-5c5b38ed5774\" (UID: \"b7c40808-e98b-4a31-b057-5c5b38ed5774\") " Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.733097 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7c40808-e98b-4a31-b057-5c5b38ed5774" (UID: "b7c40808-e98b-4a31-b057-5c5b38ed5774"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.733447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf2c02d0-08f3-4174-a1a1-44b6b99df774" (UID: "bf2c02d0-08f3-4174-a1a1-44b6b99df774"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.736241 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx" (OuterVolumeSpecName: "kube-api-access-95zhx") pod "b7c40808-e98b-4a31-b057-5c5b38ed5774" (UID: "b7c40808-e98b-4a31-b057-5c5b38ed5774"). InnerVolumeSpecName "kube-api-access-95zhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.737883 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl" (OuterVolumeSpecName: "kube-api-access-rfjtl") pod "bf2c02d0-08f3-4174-a1a1-44b6b99df774" (UID: "bf2c02d0-08f3-4174-a1a1-44b6b99df774"). InnerVolumeSpecName "kube-api-access-rfjtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833878 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95zhx\" (UniqueName: \"kubernetes.io/projected/b7c40808-e98b-4a31-b057-5c5b38ed5774-kube-api-access-95zhx\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833916 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2c02d0-08f3-4174-a1a1-44b6b99df774-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833927 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c40808-e98b-4a31-b057-5c5b38ed5774-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.833935 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfjtl\" (UniqueName: \"kubernetes.io/projected/bf2c02d0-08f3-4174-a1a1-44b6b99df774-kube-api-access-rfjtl\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.983258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ljc8f" event={"ID":"37d27352-2f68-4ced-a541-7bbd8bf33fb1","Type":"ContainerDied","Data":"d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.983285 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ljc8f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.983302 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d61b1a00076dda5f6c09dd565ac95f48df30c4fef39bb09d172703e96fa3fde2" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.985893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q4g4f" event={"ID":"b7c40808-e98b-4a31-b057-5c5b38ed5774","Type":"ContainerDied","Data":"9180015d32957f45be579c8855fd0fd063dd1ef6a963785bfaa5168f3af4dae4"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.985929 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9180015d32957f45be579c8855fd0fd063dd1ef6a963785bfaa5168f3af4dae4" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.985931 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q4g4f" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.988325 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-911e-account-create-update-hznc7" event={"ID":"bf2c02d0-08f3-4174-a1a1-44b6b99df774","Type":"ContainerDied","Data":"f1066e9f9870d5bb306bc2e02c89bc514fc6e053f3c1e28af25514a077f171c8"} Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.988427 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1066e9f9870d5bb306bc2e02c89bc514fc6e053f3c1e28af25514a077f171c8" Jan 09 11:06:55 crc kubenswrapper[4727]: I0109 11:06:55.988488 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-911e-account-create-update-hznc7" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.635849 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.644200 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.650806 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") pod \"784df696-fe59-4d64-841e-53fa77ded98f\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") pod \"21e56a97-f683-4290-b69b-ab92efd58b4c\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661565 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") pod \"a403535a-35d2-487c-9fab-20360257ec11\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661655 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") pod \"a403535a-35d2-487c-9fab-20360257ec11\" (UID: \"a403535a-35d2-487c-9fab-20360257ec11\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661696 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") pod \"784df696-fe59-4d64-841e-53fa77ded98f\" (UID: \"784df696-fe59-4d64-841e-53fa77ded98f\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.661756 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") pod \"21e56a97-f683-4290-b69b-ab92efd58b4c\" (UID: \"21e56a97-f683-4290-b69b-ab92efd58b4c\") " Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "784df696-fe59-4d64-841e-53fa77ded98f" (UID: "784df696-fe59-4d64-841e-53fa77ded98f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662380 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a403535a-35d2-487c-9fab-20360257ec11" (UID: "a403535a-35d2-487c-9fab-20360257ec11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662423 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21e56a97-f683-4290-b69b-ab92efd58b4c" (UID: "21e56a97-f683-4290-b69b-ab92efd58b4c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.662549 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784df696-fe59-4d64-841e-53fa77ded98f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.672550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv" (OuterVolumeSpecName: "kube-api-access-5mbvv") pod "21e56a97-f683-4290-b69b-ab92efd58b4c" (UID: "21e56a97-f683-4290-b69b-ab92efd58b4c"). InnerVolumeSpecName "kube-api-access-5mbvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.674428 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5" (OuterVolumeSpecName: "kube-api-access-m2lt5") pod "784df696-fe59-4d64-841e-53fa77ded98f" (UID: "784df696-fe59-4d64-841e-53fa77ded98f"). InnerVolumeSpecName "kube-api-access-m2lt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.675839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl" (OuterVolumeSpecName: "kube-api-access-q64bl") pod "a403535a-35d2-487c-9fab-20360257ec11" (UID: "a403535a-35d2-487c-9fab-20360257ec11"). InnerVolumeSpecName "kube-api-access-q64bl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765882 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a403535a-35d2-487c-9fab-20360257ec11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765917 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2lt5\" (UniqueName: \"kubernetes.io/projected/784df696-fe59-4d64-841e-53fa77ded98f-kube-api-access-m2lt5\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765932 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e56a97-f683-4290-b69b-ab92efd58b4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765944 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbvv\" (UniqueName: \"kubernetes.io/projected/21e56a97-f683-4290-b69b-ab92efd58b4c-kube-api-access-5mbvv\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:56 crc kubenswrapper[4727]: I0109 11:06:56.765956 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q64bl\" (UniqueName: \"kubernetes.io/projected/a403535a-35d2-487c-9fab-20360257ec11-kube-api-access-q64bl\") on node \"crc\" DevicePath \"\"" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.001873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qftd4" event={"ID":"21e56a97-f683-4290-b69b-ab92efd58b4c","Type":"ContainerDied","Data":"aaf3c210c209a1662b5b7d70902f65a5d7e7c38eccb07edd70b1c7ba5ef156fe"} Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.002325 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf3c210c209a1662b5b7d70902f65a5d7e7c38eccb07edd70b1c7ba5ef156fe" Jan 09 11:06:57 crc 
kubenswrapper[4727]: I0109 11:06:57.002404 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qftd4" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.005569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" event={"ID":"a403535a-35d2-487c-9fab-20360257ec11","Type":"ContainerDied","Data":"a857de2bbeebd7efd2a26ea815022fd61196dc20801f7abb2a844f81c6fc6c43"} Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.005597 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a857de2bbeebd7efd2a26ea815022fd61196dc20801f7abb2a844f81c6fc6c43" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.005652 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b0c-account-create-update-txznh" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.007561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" event={"ID":"784df696-fe59-4d64-841e-53fa77ded98f","Type":"ContainerDied","Data":"836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a"} Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.007595 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="836fa93102b2de23b29bb5fc436d51af1ad1f6c979ca39a406a4f703f610d20a" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.007641 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bf38-account-create-update-j6vxl" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.241424 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.242852 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.279705 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:06:57 crc kubenswrapper[4727]: I0109 11:06:57.297557 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 11:06:58 crc kubenswrapper[4727]: I0109 11:06:58.017616 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:06:58 crc kubenswrapper[4727]: I0109 11:06:58.017678 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 11:07:00 crc kubenswrapper[4727]: I0109 11:07:00.340970 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:07:00 crc kubenswrapper[4727]: I0109 11:07:00.342069 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 11:07:00 crc kubenswrapper[4727]: I0109 11:07:00.358639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.066358 4727 generic.go:334] "Generic (PLEG): container finished" podID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerID="9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6" exitCode=0 Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 
11:07:02.066539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6"} Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.428576 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.486892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.486995 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487090 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487190 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487334 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487396 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.487540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") pod \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\" (UID: \"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b\") " Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488070 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488218 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488963 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.488985 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.496243 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp" (OuterVolumeSpecName: "kube-api-access-v58xp") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "kube-api-access-v58xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.496254 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts" (OuterVolumeSpecName: "scripts") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.525553 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.591538 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58xp\" (UniqueName: \"kubernetes.io/projected/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-kube-api-access-v58xp\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.591703 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.591783 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.595253 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.613174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data" (OuterVolumeSpecName: "config-data") pod "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" (UID: "41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.693817 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.694234 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933412 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933883 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933905 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933920 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784df696-fe59-4d64-841e-53fa77ded98f" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933928 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="784df696-fe59-4d64-841e-53fa77ded98f" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933941 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-central-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933947 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" 
containerName="ceilometer-central-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933958 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933964 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933976 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933982 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.933991 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a403535a-35d2-487c-9fab-20360257ec11" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.933998 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a403535a-35d2-487c-9fab-20360257ec11" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934008 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934015 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934027 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934034 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934041 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934048 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: E0109 11:07:02.934055 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934061 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934219 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="784df696-fe59-4d64-841e-53fa77ded98f" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934233 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a403535a-35d2-487c-9fab-20360257ec11" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934244 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" containerName="mariadb-account-create-update" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934254 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934267 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="sg-core" Jan 09 
11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934275 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934285 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-central-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934294 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="proxy-httpd" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934308 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" containerName="mariadb-database-create" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.934318 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" containerName="ceilometer-notification-agent" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.935081 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.940791 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.941288 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cm4fw" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.941319 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.954372 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.999463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:02 crc kubenswrapper[4727]: I0109 11:07:02.999556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.000054 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " 
pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.000378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.081316 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b","Type":"ContainerDied","Data":"f2bd9db006208a075f1ffda298772516cf088a891a012e3732a1779dc1575402"} Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.081395 4727 scope.go:117] "RemoveContainer" containerID="f523aedf06625d0ca32c8bb9d50fd4650c3f54d95db1226d645ace3108057f49" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.081415 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.109624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.109679 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.109870 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.110018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.115757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " 
pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.121230 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.126774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.126856 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.139373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"nova-cell0-conductor-db-sync-6d58k\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") " pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.142182 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.148130 4727 scope.go:117] "RemoveContainer" containerID="199c0045a80461e2147f8535320400fb2344a75ba3520717613416b4348d83f1" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.161608 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.193183 4727 scope.go:117] "RemoveContainer" 
containerID="e51427589109b9b8150f20cd3ab1751b17d68d566eb7a30ec92f2dd4c4b4a53c" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.201591 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.201789 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.206006 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.206616 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.236462 4727 scope.go:117] "RemoveContainer" containerID="9b769db61af40256d9e1a23e4935715680468a3c986cc620aec16d9382b330e6" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.262434 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317232 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317366 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317400 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317449 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.317597 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420498 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420637 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") 
pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420739 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.420880 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.421518 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.425699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.430194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.432405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.437274 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.443552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.451450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"ceilometer-0\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.538087 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:03 crc kubenswrapper[4727]: I0109 11:07:03.790007 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:07:03 crc kubenswrapper[4727]: W0109 11:07:03.794780 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88c213a7_1f1e_4866_aa20_019382b42f61.slice/crio-5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486 WatchSource:0}: Error finding container 5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486: Status 404 returned error can't find the container with id 5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486 Jan 09 11:07:04 crc kubenswrapper[4727]: W0109 11:07:04.031596 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66917b73_91de_4ad9_8454_f617b6d48075.slice/crio-276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7 WatchSource:0}: Error finding container 276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7: Status 404 returned error can't find the container with id 276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7 Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.040763 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.098125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerStarted","Data":"5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486"} Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.099156 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7"} Jan 09 11:07:04 crc kubenswrapper[4727]: I0109 11:07:04.877627 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b" path="/var/lib/kubelet/pods/41acd3e1-13a5-4dcc-a57a-df46e8f1ed1b/volumes" Jan 09 11:07:05 crc kubenswrapper[4727]: I0109 11:07:05.111858 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e"} Jan 09 11:07:06 crc kubenswrapper[4727]: I0109 11:07:06.130373 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119"} Jan 09 11:07:07 crc kubenswrapper[4727]: I0109 11:07:07.142609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313"} Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.216575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerStarted","Data":"e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac"} Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.221531 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerStarted","Data":"63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0"} Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.222149 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.279412 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6d58k" podStartSLOduration=3.116774675 podStartE2EDuration="10.279389027s" podCreationTimestamp="2026-01-09 11:07:02 +0000 UTC" firstStartedPulling="2026-01-09 11:07:03.801087229 +0000 UTC m=+1269.250992010" lastFinishedPulling="2026-01-09 11:07:10.963701591 +0000 UTC m=+1276.413606362" observedRunningTime="2026-01-09 11:07:12.243423955 +0000 UTC m=+1277.693328736" watchObservedRunningTime="2026-01-09 11:07:12.279389027 +0000 UTC m=+1277.729293808"
Jan 09 11:07:12 crc kubenswrapper[4727]: I0109 11:07:12.280062 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.353823094 podStartE2EDuration="9.280056494s" podCreationTimestamp="2026-01-09 11:07:03 +0000 UTC" firstStartedPulling="2026-01-09 11:07:04.034598053 +0000 UTC m=+1269.484502834" lastFinishedPulling="2026-01-09 11:07:10.960831453 +0000 UTC m=+1276.410736234" observedRunningTime="2026-01-09 11:07:12.271228916 +0000 UTC m=+1277.721133737" watchObservedRunningTime="2026-01-09 11:07:12.280056494 +0000 UTC m=+1277.729961295"
Jan 09 11:07:23 crc kubenswrapper[4727]: I0109 11:07:23.370582 4727 generic.go:334] "Generic (PLEG): container finished" podID="88c213a7-1f1e-4866-aa20-019382b42f61" containerID="e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac" exitCode=0
Jan 09 11:07:23 crc kubenswrapper[4727]: I0109 11:07:23.370691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerDied","Data":"e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac"}
Jan 09 11:07:24 crc kubenswrapper[4727]: I0109 11:07:24.884156 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.046100 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") "
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.046945 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") "
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.047003 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") "
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.047113 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") pod \"88c213a7-1f1e-4866-aa20-019382b42f61\" (UID: \"88c213a7-1f1e-4866-aa20-019382b42f61\") "
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.054417 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts" (OuterVolumeSpecName: "scripts") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.055101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt" (OuterVolumeSpecName: "kube-api-access-clqnt") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "kube-api-access-clqnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.078082 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data" (OuterVolumeSpecName: "config-data") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.085215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88c213a7-1f1e-4866-aa20-019382b42f61" (UID: "88c213a7-1f1e-4866-aa20-019382b42f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150753 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqnt\" (UniqueName: \"kubernetes.io/projected/88c213a7-1f1e-4866-aa20-019382b42f61-kube-api-access-clqnt\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150819 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150835 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.150850 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c213a7-1f1e-4866-aa20-019382b42f61-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.402647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6d58k" event={"ID":"88c213a7-1f1e-4866-aa20-019382b42f61","Type":"ContainerDied","Data":"5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486"}
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.402721 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8b68dd8b709832a2b2a56465ee20d9f5c59f1ef75d1fc48111a98ea9fce486"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.402768 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6d58k"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.599562 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 09 11:07:25 crc kubenswrapper[4727]: E0109 11:07:25.600243 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" containerName="nova-cell0-conductor-db-sync"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.600268 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" containerName="nova-cell0-conductor-db-sync"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.600688 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" containerName="nova-cell0-conductor-db-sync"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.601724 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.608469 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.612455 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cm4fw"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.615882 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.768435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txzq\" (UniqueName: \"kubernetes.io/projected/3aab78e7-6f64-4c9e-bb37-f670092f06eb-kube-api-access-8txzq\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.768536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.768618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.870406 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txzq\" (UniqueName: \"kubernetes.io/projected/3aab78e7-6f64-4c9e-bb37-f670092f06eb-kube-api-access-8txzq\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.870469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.870547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.877110 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.880794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aab78e7-6f64-4c9e-bb37-f670092f06eb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.889828 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txzq\" (UniqueName: \"kubernetes.io/projected/3aab78e7-6f64-4c9e-bb37-f670092f06eb-kube-api-access-8txzq\") pod \"nova-cell0-conductor-0\" (UID: \"3aab78e7-6f64-4c9e-bb37-f670092f06eb\") " pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:25 crc kubenswrapper[4727]: I0109 11:07:25.937231 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:26 crc kubenswrapper[4727]: I0109 11:07:26.404289 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 09 11:07:26 crc kubenswrapper[4727]: W0109 11:07:26.408871 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aab78e7_6f64_4c9e_bb37_f670092f06eb.slice/crio-7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b WatchSource:0}: Error finding container 7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b: Status 404 returned error can't find the container with id 7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b
Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.423317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3aab78e7-6f64-4c9e-bb37-f670092f06eb","Type":"ContainerStarted","Data":"c8fc44ca2c634b15a716c734a55cc0211e84e35a36a4795cd5371387b4d5ccd5"}
Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.423697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3aab78e7-6f64-4c9e-bb37-f670092f06eb","Type":"ContainerStarted","Data":"7832dc54e611bc0db5e92444e71db0d1ef60f03c579c6839b771be84f5db394b"}
Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.425708 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:27 crc kubenswrapper[4727]: I0109 11:07:27.444682 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.444659486 podStartE2EDuration="2.444659486s" podCreationTimestamp="2026-01-09 11:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:27.440019831 +0000 UTC m=+1292.889924632" watchObservedRunningTime="2026-01-09 11:07:27.444659486 +0000 UTC m=+1292.894564277"
Jan 09 11:07:33 crc kubenswrapper[4727]: I0109 11:07:33.547299 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 09 11:07:35 crc kubenswrapper[4727]: I0109 11:07:35.969192 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.590393 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.592065 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.596290 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.596621 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.613963 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716246 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716558 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.716822 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.805056 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.806925 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.813383 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.818940 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.819085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.819129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.819225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.826495 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.830537 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.831012 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.853433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.891056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"nova-cell0-cell-mapping-bd2gt\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") " pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.909035 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.910678 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.917957 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.922684 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.923013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.923055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.923117 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.937261 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.938200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.972591 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.974104 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 09 11:07:36 crc kubenswrapper[4727]: I0109 11:07:36.980494 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024721 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.024812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.028871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.039343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.087581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130352 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130468 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.130495 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.131025 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.139944 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.155466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.157242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.166199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"nova-api-0\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.186274 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"nova-metadata-0\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.212248 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.223320 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.250235 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.252143 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.252188 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.252255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.254564 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.258288 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.266322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.266829 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"]
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.273236 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.282727 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.284885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"nova-scheduler-0\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.295815 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"]
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.303601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.322866 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.356758 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.356843 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.356985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357091 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357123 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357439 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.357549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.459923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.459994 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8"
Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID:
\"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460271 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.460321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.461057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.461540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 
11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.462401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.462439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.462767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.469428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.478568 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.486440 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"dnsmasq-dns-845d6d6f59-jqnl8\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.492831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.597203 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.614826 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.620463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.781649 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.782973 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.786918 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.787203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.821813 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.884666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.885163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.885594 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.885747 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.897685 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.986057 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987717 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:37 crc kubenswrapper[4727]: I0109 11:07:37.987955 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: W0109 11:07:38.020941 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f6c5e4_6c29_48d0_a5af_819557cc9e04.slice/crio-9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442 WatchSource:0}: Error finding container 9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442: Status 404 returned error can't find the container with id 9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442 Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.027910 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.028812 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.029919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.031114 
4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.038701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-br2nr\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") " pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.111484 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr" Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.246423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.374601 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.634167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerStarted","Data":"3e496057afd48fb428863c25133769c9e960876cd410faca157a7658ba5d522c"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.637876 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerStarted","Data":"9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.640932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerStarted","Data":"c715a92f5aa615c93db65f6e9d930c15cd9844cbd3158043d67b9b3325878e65"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.642940 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerStarted","Data":"60bccc0ec47f588ad42cb564633edde3321617957b8b8fda8f4da812cc7b79ef"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.643985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerStarted","Data":"b0d29dd9f9da1aa242230e17c6109e9e60b379b92068ffedf5804d638ea36739"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.645815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerStarted","Data":"d4d95f5c2c800a4020d7d6b3b3d3edcecb93e5aeb2770089a779d7cd1b15ec07"} Jan 09 11:07:38 crc kubenswrapper[4727]: I0109 11:07:38.676361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:07:38 crc kubenswrapper[4727]: W0109 11:07:38.680983 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc95f5eef_fff8_427b_9318_ebfcf188f0a9.slice/crio-426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975 WatchSource:0}: Error finding container 426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975: Status 404 returned error can't find the container with id 426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975 Jan 09 11:07:39 crc kubenswrapper[4727]: I0109 11:07:39.687872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerStarted","Data":"426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.704916 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerStarted","Data":"dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.711069 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerID="72f21ea3746f823a01ff3632cf334c040301673bdb3b5a878b6260e8b9af266c" exitCode=0 Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.711147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerDied","Data":"72f21ea3746f823a01ff3632cf334c040301673bdb3b5a878b6260e8b9af266c"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.713596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerStarted","Data":"f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677"} Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.726949 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-br2nr" podStartSLOduration=3.726926716 podStartE2EDuration="3.726926716s" podCreationTimestamp="2026-01-09 11:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:40.723865793 +0000 UTC m=+1306.173770584" watchObservedRunningTime="2026-01-09 11:07:40.726926716 +0000 UTC m=+1306.176831507" Jan 09 11:07:40 crc kubenswrapper[4727]: I0109 11:07:40.773778 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bd2gt" podStartSLOduration=4.773756679 podStartE2EDuration="4.773756679s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:40.766844474 +0000 UTC m=+1306.216749275" watchObservedRunningTime="2026-01-09 11:07:40.773756679 +0000 UTC m=+1306.223661470" Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.546727 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.559959 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.733847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerStarted","Data":"e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01"} Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.734213 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.736481 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.736830 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" containerID="cri-o://aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9" gracePeriod=30 Jan 09 11:07:41 crc kubenswrapper[4727]: I0109 11:07:41.768808 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" podStartSLOduration=5.768775723 podStartE2EDuration="5.768775723s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 
11:07:41.758207232 +0000 UTC m=+1307.208112033" watchObservedRunningTime="2026-01-09 11:07:41.768775723 +0000 UTC m=+1307.218680524" Jan 09 11:07:42 crc kubenswrapper[4727]: I0109 11:07:42.749679 4727 generic.go:334] "Generic (PLEG): container finished" podID="26965ac2-3dab-452c-8a34-83eadab4b929" containerID="aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9" exitCode=2 Jan 09 11:07:42 crc kubenswrapper[4727]: I0109 11:07:42.749758 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerDied","Data":"aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.198061 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.199950 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" containerID="cri-o://f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.200096 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" containerID="cri-o://0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.200024 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" containerID="cri-o://63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.200027 4727 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-notification-agent" containerID="cri-o://e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119" gracePeriod=30 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811444 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0" exitCode=0 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811854 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313" exitCode=2 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811864 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e" exitCode=0 Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811958 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.811969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.814651 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"26965ac2-3dab-452c-8a34-83eadab4b929","Type":"ContainerDied","Data":"049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc"} Jan 09 11:07:44 crc kubenswrapper[4727]: I0109 11:07:44.814678 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049c2fe8b369ef06c1fc4838465bb21e769f3c48dd57666bf8f8004d62166bdc" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.110348 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.188800 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") pod \"26965ac2-3dab-452c-8a34-83eadab4b929\" (UID: \"26965ac2-3dab-452c-8a34-83eadab4b929\") " Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.213519 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx" (OuterVolumeSpecName: "kube-api-access-zzgpx") pod "26965ac2-3dab-452c-8a34-83eadab4b929" (UID: "26965ac2-3dab-452c-8a34-83eadab4b929"). InnerVolumeSpecName "kube-api-access-zzgpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.297597 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzgpx\" (UniqueName: \"kubernetes.io/projected/26965ac2-3dab-452c-8a34-83eadab4b929-kube-api-access-zzgpx\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.825910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerStarted","Data":"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.826149 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" gracePeriod=30 Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.831033 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log" containerID="cri-o://6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" gracePeriod=30 Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.831207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerStarted","Data":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.831273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerStarted","Data":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} Jan 09 11:07:45 crc 
kubenswrapper[4727]: I0109 11:07:45.831351 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata" containerID="cri-o://f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" gracePeriod=30 Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.840112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerStarted","Data":"8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.840179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerStarted","Data":"82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.848788 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.850791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerStarted","Data":"576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0"} Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.851277 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.42299198 podStartE2EDuration="9.851258633s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:38.275040832 +0000 UTC m=+1303.724945613" lastFinishedPulling="2026-01-09 11:07:44.703307475 +0000 UTC m=+1310.153212266" observedRunningTime="2026-01-09 11:07:45.844299008 +0000 UTC m=+1311.294203789" watchObservedRunningTime="2026-01-09 11:07:45.851258633 +0000 UTC m=+1311.301163414" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.890241 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.220687663 podStartE2EDuration="9.890220119s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:38.024325764 +0000 UTC m=+1303.474230545" lastFinishedPulling="2026-01-09 11:07:44.69385822 +0000 UTC m=+1310.143763001" observedRunningTime="2026-01-09 11:07:45.86542404 +0000 UTC m=+1311.315328831" watchObservedRunningTime="2026-01-09 11:07:45.890220119 +0000 UTC m=+1311.340124900" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.908576 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.128325168 podStartE2EDuration="9.908555064s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:37.914250259 +0000 UTC m=+1303.364155040" lastFinishedPulling="2026-01-09 
11:07:44.694480155 +0000 UTC m=+1310.144384936" observedRunningTime="2026-01-09 11:07:45.892894752 +0000 UTC m=+1311.342799553" watchObservedRunningTime="2026-01-09 11:07:45.908555064 +0000 UTC m=+1311.358459845" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.967545 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.307221391 podStartE2EDuration="9.967489186s" podCreationTimestamp="2026-01-09 11:07:36 +0000 UTC" firstStartedPulling="2026-01-09 11:07:38.031894595 +0000 UTC m=+1303.481799376" lastFinishedPulling="2026-01-09 11:07:44.69216239 +0000 UTC m=+1310.142067171" observedRunningTime="2026-01-09 11:07:45.946927077 +0000 UTC m=+1311.396831848" watchObservedRunningTime="2026-01-09 11:07:45.967489186 +0000 UTC m=+1311.417393967" Jan 09 11:07:45 crc kubenswrapper[4727]: I0109 11:07:45.991249 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.013673 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.023809 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:46 crc kubenswrapper[4727]: E0109 11:07:46.024468 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.024504 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.024799 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.025946 4727 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.033726 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.055797 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.055988 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.121475 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.121547 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.121575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.122193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzxc\" 
(UniqueName: \"kubernetes.io/projected/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-api-access-djzxc\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224205 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224277 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224310 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.224419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzxc\" (UniqueName: \"kubernetes.io/projected/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-api-access-djzxc\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.238188 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.240241 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.244559 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.251726 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzxc\" (UniqueName: \"kubernetes.io/projected/bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3-kube-api-access-djzxc\") pod \"kube-state-metrics-0\" (UID: \"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3\") " pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.399390 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.799555 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845430 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845697 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.845845 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") pod \"6cee5e1e-cd9a-4400-ab94-66383369a072\" (UID: \"6cee5e1e-cd9a-4400-ab94-66383369a072\") " Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.847686 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs" (OuterVolumeSpecName: "logs") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.854839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh" (OuterVolumeSpecName: "kube-api-access-5l5zh") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "kube-api-access-5l5zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.877629 4727 generic.go:334] "Generic (PLEG): container finished" podID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" exitCode=0 Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.877681 4727 generic.go:334] "Generic (PLEG): container finished" podID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" exitCode=143 Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.878177 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.887882 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" path="/var/lib/kubelet/pods/26965ac2-3dab-452c-8a34-83eadab4b929/volumes" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.891771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data" (OuterVolumeSpecName: "config-data") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.913076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cee5e1e-cd9a-4400-ab94-66383369a072" (UID: "6cee5e1e-cd9a-4400-ab94-66383369a072"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950307 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cee5e1e-cd9a-4400-ab94-66383369a072-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950336 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950346 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5zh\" (UniqueName: \"kubernetes.io/projected/6cee5e1e-cd9a-4400-ab94-66383369a072-kube-api-access-5l5zh\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.950357 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cee5e1e-cd9a-4400-ab94-66383369a072-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970679 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerDied","Data":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerDied","Data":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cee5e1e-cd9a-4400-ab94-66383369a072","Type":"ContainerDied","Data":"d4d95f5c2c800a4020d7d6b3b3d3edcecb93e5aeb2770089a779d7cd1b15ec07"} Jan 09 11:07:46 crc kubenswrapper[4727]: I0109 11:07:46.970803 4727 scope.go:117] "RemoveContainer" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.000465 4727 scope.go:117] "RemoveContainer" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.027367 4727 scope.go:117] "RemoveContainer" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.027976 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": container with ID starting with f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec not found: ID does not exist" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028121 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} err="failed to get 
container status \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": rpc error: code = NotFound desc = could not find container \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": container with ID starting with f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec not found: ID does not exist" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028228 4727 scope.go:117] "RemoveContainer" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.028730 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": container with ID starting with 6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70 not found: ID does not exist" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028798 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} err="failed to get container status \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": rpc error: code = NotFound desc = could not find container \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": container with ID starting with 6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70 not found: ID does not exist" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.028828 4727 scope.go:117] "RemoveContainer" containerID="f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.029137 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec"} 
err="failed to get container status \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": rpc error: code = NotFound desc = could not find container \"f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec\": container with ID starting with f8d6466353e7f36a68ca3844b1f82e9991df0416e7a86fa447d1a9490b9c5eec not found: ID does not exist" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.029185 4727 scope.go:117] "RemoveContainer" containerID="6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.029556 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70"} err="failed to get container status \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": rpc error: code = NotFound desc = could not find container \"6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70\": container with ID starting with 6159d88ee01a18f363466f87514e7fd29edb3d6b25eb41331ad6a80cf706fd70 not found: ID does not exist" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.228449 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.240350 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.252229 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.252805 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.252827 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log" Jan 09 
11:07:47 crc kubenswrapper[4727]: E0109 11:07:47.252869 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.252877 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.253103 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-metadata" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.253135 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" containerName="nova-metadata-log" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.254497 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.257745 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.257930 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.259637 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.279200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.279260 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.325025 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 
11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.325087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.367945 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368290 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368340 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.368578 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"nova-metadata-0\" (UID: 
\"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.379070 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.471501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.471911 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"nova-metadata-0\" (UID: 
\"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.472824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.479081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.479891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.489405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.509068 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"nova-metadata-0\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") " pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.599836 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.612139 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.623792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.703346 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.705471 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" containerID="cri-o://8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713" gracePeriod=10 Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.963095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3","Type":"ContainerStarted","Data":"a1a595cd42ec25bc504f905c30aa4de5558da8923298ed0b31ef2be0485bd6fb"} Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.963172 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3","Type":"ContainerStarted","Data":"d69c62c765598f296fde9fd0b9f0147883a6b052add7173c2feb6b3857d29099"} Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.963375 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 09 11:07:47 crc kubenswrapper[4727]: I0109 11:07:47.983888 4727 generic.go:334] "Generic (PLEG): container finished" podID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerID="8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713" exitCode=0 Jan 09 11:07:47 crc kubenswrapper[4727]: 
I0109 11:07:47.985312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerDied","Data":"8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713"} Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.047360 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.082610 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.682129419 podStartE2EDuration="3.079440811s" podCreationTimestamp="2026-01-09 11:07:45 +0000 UTC" firstStartedPulling="2026-01-09 11:07:46.97348779 +0000 UTC m=+1312.423392581" lastFinishedPulling="2026-01-09 11:07:47.370799192 +0000 UTC m=+1312.820703973" observedRunningTime="2026-01-09 11:07:47.995110637 +0000 UTC m=+1313.445015418" watchObservedRunningTime="2026-01-09 11:07:48.079440811 +0000 UTC m=+1313.529345612" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.365794 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="26965ac2-3dab-452c-8a34-83eadab4b929" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.366252 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.366292 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.404399 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.510994 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.512820 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.512861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.512946 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.513158 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.513193 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") pod \"b50668e7-e061-453a-bfcb-09cd1392aa57\" (UID: \"b50668e7-e061-453a-bfcb-09cd1392aa57\") " Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.519323 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr" (OuterVolumeSpecName: "kube-api-access-92mtr") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "kube-api-access-92mtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: W0109 11:07:48.562574 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f5264c_a972_499a_adff_2ee6089e9370.slice/crio-27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986 WatchSource:0}: Error finding container 27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986: Status 404 returned error can't find the container with id 27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986 Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.583486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.589539 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: 
"b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.604040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.619628 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92mtr\" (UniqueName: \"kubernetes.io/projected/b50668e7-e061-453a-bfcb-09cd1392aa57-kube-api-access-92mtr\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.620044 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.620128 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.629393 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.636465 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config" (OuterVolumeSpecName: "config") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.683991 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b50668e7-e061-453a-bfcb-09cd1392aa57" (UID: "b50668e7-e061-453a-bfcb-09cd1392aa57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.723810 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.723862 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.723875 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b50668e7-e061-453a-bfcb-09cd1392aa57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:48 crc kubenswrapper[4727]: I0109 11:07:48.887406 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cee5e1e-cd9a-4400-ab94-66383369a072" path="/var/lib/kubelet/pods/6cee5e1e-cd9a-4400-ab94-66383369a072/volumes" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 
11:07:49.013032 4727 generic.go:334] "Generic (PLEG): container finished" podID="66917b73-91de-4ad9-8454-f617b6d48075" containerID="e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119" exitCode=0 Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.013094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119"} Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.036815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerStarted","Data":"27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986"} Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.053215 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.053703 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q44wc" event={"ID":"b50668e7-e061-453a-bfcb-09cd1392aa57","Type":"ContainerDied","Data":"1fc9e9988fd4856268dac8faebd8ec23ba321d236e5bf07d0594fdfe44867d1e"} Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.053736 4727 scope.go:117] "RemoveContainer" containerID="8627533c145497b22847b1f7ceb1e62eb632dccd6e25eaa5ae45635f555e4713" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.123701 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.125991 4727 scope.go:117] "RemoveContainer" containerID="40bb9476bfc07b9354c89f5cbef3057e68cde163c53908f4d6837e2be7ee3f19" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.135210 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q44wc"] Jan 
09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.200696 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339375 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339652 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339716 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339752 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339869 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.339902 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") pod \"66917b73-91de-4ad9-8454-f617b6d48075\" (UID: \"66917b73-91de-4ad9-8454-f617b6d48075\") " Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.347006 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.347216 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.351908 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts" (OuterVolumeSpecName: "scripts") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.361611 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn" (OuterVolumeSpecName: "kube-api-access-khkrn") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "kube-api-access-khkrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.388966 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443524 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443561 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443571 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkrn\" (UniqueName: \"kubernetes.io/projected/66917b73-91de-4ad9-8454-f617b6d48075-kube-api-access-khkrn\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443584 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.443595 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66917b73-91de-4ad9-8454-f617b6d48075-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.453732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.501227 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data" (OuterVolumeSpecName: "config-data") pod "66917b73-91de-4ad9-8454-f617b6d48075" (UID: "66917b73-91de-4ad9-8454-f617b6d48075"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.545867 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:49 crc kubenswrapper[4727]: I0109 11:07:49.545922 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66917b73-91de-4ad9-8454-f617b6d48075-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.065716 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerStarted","Data":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.066125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerStarted","Data":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.070905 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66917b73-91de-4ad9-8454-f617b6d48075","Type":"ContainerDied","Data":"276adbde0469af09eb2c3e9e723052e9a9fa7e90456a8c709e4adf582d54bbc7"} Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.070990 4727 scope.go:117] "RemoveContainer" containerID="63736aa4a884254b145d396a1c00dec1e39d8c339392e16843261eca9d0284f0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.071257 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.092410 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.092367633 podStartE2EDuration="3.092367633s" podCreationTimestamp="2026-01-09 11:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:50.090064358 +0000 UTC m=+1315.539969139" watchObservedRunningTime="2026-01-09 11:07:50.092367633 +0000 UTC m=+1315.542272424" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.116808 4727 scope.go:117] "RemoveContainer" containerID="0669a570d054b2222a3b0953a556ad6c9af1c507831ff19d4d2502591dc97313" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.148096 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.151220 4727 scope.go:117] "RemoveContainer" containerID="e646f08eff4fd9a8496a84ff766fd4adffd9c9f8c38a855d53f5ff2fa95e4119" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.174038 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.208948 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.213988 4727 scope.go:117] "RemoveContainer" containerID="f88250052d399058e544c079ea25d993f7764452235a3b7bdbb6ffdc528c4d1e" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226128 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-notification-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226161 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" 
containerName="ceilometer-notification-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226180 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226187 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226202 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226208 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226230 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226236 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226268 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226277 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" Jan 09 11:07:50 crc kubenswrapper[4727]: E0109 11:07:50.226294 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="init" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226302 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="init" Jan 
09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226841 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" containerName="dnsmasq-dns" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226886 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="sg-core" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226911 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="proxy-httpd" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.226930 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-notification-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.227005 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="66917b73-91de-4ad9-8454-f617b6d48075" containerName="ceilometer-central-agent" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.231960 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.236147 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.236715 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.237060 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.262407 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.283158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.284681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.284718 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.287363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289086 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.289208 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.391486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"ceilometer-0\" (UID: 
\"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393165 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393665 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.393804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.394403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.394746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.398919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.399650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0" Jan 09 11:07:50 crc 
kubenswrapper[4727]: I0109 11:07:50.401802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0"
Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.402248 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0"
Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.414673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0"
Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.415425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"ceilometer-0\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " pod="openstack/ceilometer-0"
Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.565875 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.879388 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66917b73-91de-4ad9-8454-f617b6d48075" path="/var/lib/kubelet/pods/66917b73-91de-4ad9-8454-f617b6d48075/volumes"
Jan 09 11:07:50 crc kubenswrapper[4727]: I0109 11:07:50.881766 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b50668e7-e061-453a-bfcb-09cd1392aa57" path="/var/lib/kubelet/pods/b50668e7-e061-453a-bfcb-09cd1392aa57/volumes"
Jan 09 11:07:51 crc kubenswrapper[4727]: I0109 11:07:51.089279 4727 generic.go:334] "Generic (PLEG): container finished" podID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerID="f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677" exitCode=0
Jan 09 11:07:51 crc kubenswrapper[4727]: I0109 11:07:51.089400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerDied","Data":"f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677"}
Jan 09 11:07:51 crc kubenswrapper[4727]: I0109 11:07:51.109271 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.103648 4727 generic.go:334] "Generic (PLEG): container finished" podID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerID="dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534" exitCode=0
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.103766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerDied","Data":"dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534"}
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.107661 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca"}
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.107831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"a325755858225e11102c3b57ad31be80d35da46e13778310a2800ddb5d42db62"}
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.613244 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.613862 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.620399 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") "
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755303 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") "
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755355 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") "
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.755379 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") pod \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\" (UID: \"10127ac2-1ffe-4ad6-b483-ff5952f88b4a\") "
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.765884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr" (OuterVolumeSpecName: "kube-api-access-h2hwr") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "kube-api-access-h2hwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.773018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts" (OuterVolumeSpecName: "scripts") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.793071 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.799674 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data" (OuterVolumeSpecName: "config-data") pod "10127ac2-1ffe-4ad6-b483-ff5952f88b4a" (UID: "10127ac2-1ffe-4ad6-b483-ff5952f88b4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858393 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858451 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2hwr\" (UniqueName: \"kubernetes.io/projected/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-kube-api-access-h2hwr\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858468 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:52 crc kubenswrapper[4727]: I0109 11:07:52.858481 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10127ac2-1ffe-4ad6-b483-ff5952f88b4a-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.119489 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247"}
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.122963 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bd2gt" event={"ID":"10127ac2-1ffe-4ad6-b483-ff5952f88b4a","Type":"ContainerDied","Data":"b0d29dd9f9da1aa242230e17c6109e9e60b379b92068ffedf5804d638ea36739"}
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.122997 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d29dd9f9da1aa242230e17c6109e9e60b379b92068ffedf5804d638ea36739"
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.123009 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bd2gt"
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.321893 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.322227 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" containerID="cri-o://82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e" gracePeriod=30
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.322675 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" containerID="cri-o://8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf" gracePeriod=30
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.352286 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.352528 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" containerID="cri-o://576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0" gracePeriod=30
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.440656 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.440985 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log" containerID="cri-o://a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" gracePeriod=30
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.441273 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata" containerID="cri-o://d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" gracePeriod=30
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.577403 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr"
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700622 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") "
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") "
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") "
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.700881 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") pod \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\" (UID: \"c95f5eef-fff8-427b-9318-ebfcf188f0a9\") "
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.709733 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts" (OuterVolumeSpecName: "scripts") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.715759 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8" (OuterVolumeSpecName: "kube-api-access-98zx8") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "kube-api-access-98zx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.753997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data" (OuterVolumeSpecName: "config-data") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.761885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c95f5eef-fff8-427b-9318-ebfcf188f0a9" (UID: "c95f5eef-fff8-427b-9318-ebfcf188f0a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806216 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98zx8\" (UniqueName: \"kubernetes.io/projected/c95f5eef-fff8-427b-9318-ebfcf188f0a9-kube-api-access-98zx8\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806274 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806291 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:53 crc kubenswrapper[4727]: I0109 11:07:53.806305 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95f5eef-fff8-427b-9318-ebfcf188f0a9-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.138897 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.141422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7"}
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.149897 4727 generic.go:334] "Generic (PLEG): container finished" podID="28f5264c-a972-499a-adff-2ee6089e9370" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c" exitCode=0
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.149933 4727 generic.go:334] "Generic (PLEG): container finished" podID="28f5264c-a972-499a-adff-2ee6089e9370" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d" exitCode=143
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.149985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerDied","Data":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"}
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerDied","Data":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"}
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150032 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"28f5264c-a972-499a-adff-2ee6089e9370","Type":"ContainerDied","Data":"27d0b2da9e51585e35a6fe7faac2801fe559aacf72ed846a3f09a6c2dc25a986"}
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150050 4727 scope.go:117] "RemoveContainer" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.150197 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.156840 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerID="82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e" exitCode=143
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.156934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerDied","Data":"82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e"}
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.164538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-br2nr" event={"ID":"c95f5eef-fff8-427b-9318-ebfcf188f0a9","Type":"ContainerDied","Data":"426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975"}
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.164638 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="426d228cc1898052b5240e6866e240e2e3026960aedc7f72c6ec1fb2cb279975"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.164681 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-br2nr"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.218693 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") "
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.218852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") "
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.218976 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") "
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.219017 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") "
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.219094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") pod \"28f5264c-a972-499a-adff-2ee6089e9370\" (UID: \"28f5264c-a972-499a-adff-2ee6089e9370\") "
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.224721 4727 scope.go:117] "RemoveContainer" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.227200 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs" (OuterVolumeSpecName: "logs") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.258829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g" (OuterVolumeSpecName: "kube-api-access-vwl7g") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "kube-api-access-vwl7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.292084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.326602 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327308 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerName="nova-manage"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327329 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerName="nova-manage"
Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327345 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327352 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata"
Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327370 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327376 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log"
Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.327388 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerName="nova-cell1-conductor-db-sync"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327424 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerName="nova-cell1-conductor-db-sync"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327733 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-metadata"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327763 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" containerName="nova-manage"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327773 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f5264c-a972-499a-adff-2ee6089e9370" containerName="nova-metadata-log"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.327788 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" containerName="nova-cell1-conductor-db-sync"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.328886 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.334840 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwl7g\" (UniqueName: \"kubernetes.io/projected/28f5264c-a972-499a-adff-2ee6089e9370-kube-api-access-vwl7g\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.334875 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.334886 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f5264c-a972-499a-adff-2ee6089e9370-logs\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.337644 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.337851 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data" (OuterVolumeSpecName: "config-data") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.353587 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.364007 4727 scope.go:117] "RemoveContainer" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"
Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.364913 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": container with ID starting with d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c not found: ID does not exist" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.364953 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} err="failed to get container status \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": rpc error: code = NotFound desc = could not find container \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": container with ID starting with d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c not found: ID does not exist"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.364985 4727 scope.go:117] "RemoveContainer" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"
Jan 09 11:07:54 crc kubenswrapper[4727]: E0109 11:07:54.367071 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": container with ID starting with a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d not found: ID does not exist" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.367180 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} err="failed to get container status \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": rpc error: code = NotFound desc = could not find container \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": container with ID starting with a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d not found: ID does not exist"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.367263 4727 scope.go:117] "RemoveContainer" containerID="d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.375525 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c"} err="failed to get container status \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": rpc error: code = NotFound desc = could not find container \"d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c\": container with ID starting with d23eec60a2a35f729369f22a74b6cab19a9b828ea65bd39b4399c819252f302c not found: ID does not exist"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.375615 4727 scope.go:117] "RemoveContainer" containerID="a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.376068 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d"} err="failed to get container status \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": rpc error: code = NotFound desc = could not find container \"a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d\": container with ID starting with a3f61e607fbdd77958092654312b9c769336b6bd89857571a33bbfa287d8a46d not found: ID does not exist"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.387630 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "28f5264c-a972-499a-adff-2ee6089e9370" (UID: "28f5264c-a972-499a-adff-2ee6089e9370"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.436840 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.436970 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mns\" (UniqueName: \"kubernetes.io/projected/6a601271-3d79-4446-bc6f-81b4490541f4-kube-api-access-48mns\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.437069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.437132 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.437146 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f5264c-a972-499a-adff-2ee6089e9370-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.535288 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.538897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.539010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mns\" (UniqueName: \"kubernetes.io/projected/6a601271-3d79-4446-bc6f-81b4490541f4-kube-api-access-48mns\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.539122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.546140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.546183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a601271-3d79-4446-bc6f-81b4490541f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.555594 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.568143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mns\" (UniqueName: \"kubernetes.io/projected/6a601271-3d79-4446-bc6f-81b4490541f4-kube-api-access-48mns\") pod \"nova-cell1-conductor-0\" (UID: \"6a601271-3d79-4446-bc6f-81b4490541f4\") " pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.575186 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.577541 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.582692 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.583396 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.610328 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643303 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.643381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.677707 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747186 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747358 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0"
Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747397 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod
\"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.747416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.748874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.754050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.755009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.755837 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.771220 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"nova-metadata-0\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " pod="openstack/nova-metadata-0" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.899601 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f5264c-a972-499a-adff-2ee6089e9370" path="/var/lib/kubelet/pods/28f5264c-a972-499a-adff-2ee6089e9370/volumes" Jan 09 11:07:54 crc kubenswrapper[4727]: I0109 11:07:54.906504 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.181452 4727 generic.go:334] "Generic (PLEG): container finished" podID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerID="576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0" exitCode=0 Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.181533 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerDied","Data":"576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0"} Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.183211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55f6c5e4-6c29-48d0-a5af-819557cc9e04","Type":"ContainerDied","Data":"9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442"} Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.183236 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd9b61c9ed58b30f7218593852eee2cb2e587918784e2ed76672fb257177442" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.206973 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.281947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") pod \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.282154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") pod \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.282243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") pod \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\" (UID: \"55f6c5e4-6c29-48d0-a5af-819557cc9e04\") " Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.315064 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m" (OuterVolumeSpecName: "kube-api-access-2ld4m") pod "55f6c5e4-6c29-48d0-a5af-819557cc9e04" (UID: "55f6c5e4-6c29-48d0-a5af-819557cc9e04"). InnerVolumeSpecName "kube-api-access-2ld4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.363800 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data" (OuterVolumeSpecName: "config-data") pod "55f6c5e4-6c29-48d0-a5af-819557cc9e04" (UID: "55f6c5e4-6c29-48d0-a5af-819557cc9e04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.366686 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.378661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55f6c5e4-6c29-48d0-a5af-819557cc9e04" (UID: "55f6c5e4-6c29-48d0-a5af-819557cc9e04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.389883 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.389933 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f6c5e4-6c29-48d0-a5af-819557cc9e04-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.389982 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ld4m\" (UniqueName: \"kubernetes.io/projected/55f6c5e4-6c29-48d0-a5af-819557cc9e04-kube-api-access-2ld4m\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:55 crc kubenswrapper[4727]: W0109 11:07:55.557268 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ddc88_eab5_4564_a55d_aafb1d7084d2.slice/crio-2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91 WatchSource:0}: Error finding container 2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91: Status 404 returned error can't find the container with id 
2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91 Jan 09 11:07:55 crc kubenswrapper[4727]: I0109 11:07:55.559156 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.196939 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerStarted","Data":"b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.197487 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.200714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerStarted","Data":"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.200757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerStarted","Data":"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.200770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerStarted","Data":"2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.203218 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.203224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6a601271-3d79-4446-bc6f-81b4490541f4","Type":"ContainerStarted","Data":"d66d8ffc67819ad65b13ef623326e2acb7c14ec7bfa47782b057c5ef182ee5db"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.203289 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6a601271-3d79-4446-bc6f-81b4490541f4","Type":"ContainerStarted","Data":"4d93b4626d4b85fb5061cc047d0978aa660860c3759658cc2b8ddb401094f551"} Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.227597 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.293639152 podStartE2EDuration="6.227578813s" podCreationTimestamp="2026-01-09 11:07:50 +0000 UTC" firstStartedPulling="2026-01-09 11:07:51.124338946 +0000 UTC m=+1316.574243727" lastFinishedPulling="2026-01-09 11:07:55.058278607 +0000 UTC m=+1320.508183388" observedRunningTime="2026-01-09 11:07:56.222604745 +0000 UTC m=+1321.672509526" watchObservedRunningTime="2026-01-09 11:07:56.227578813 +0000 UTC m=+1321.677483594" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.260876 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.270412 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.287888 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: E0109 11:07:56.288777 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 
11:07:56.288903 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.289259 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" containerName="nova-scheduler-scheduler" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.290150 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.293925 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.297004 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.296954972 podStartE2EDuration="2.296954972s" podCreationTimestamp="2026-01-09 11:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:56.276325652 +0000 UTC m=+1321.726230423" watchObservedRunningTime="2026-01-09 11:07:56.296954972 +0000 UTC m=+1321.746859753" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311382 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311455 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.311729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.383328 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.383295154 podStartE2EDuration="2.383295154s" podCreationTimestamp="2026-01-09 11:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:56.3280282 +0000 UTC m=+1321.777932981" watchObservedRunningTime="2026-01-09 11:07:56.383295154 +0000 UTC m=+1321.833199935" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.414238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.414324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.414462 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.433638 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.437243 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.443724 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.462898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.642110 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:07:56 crc kubenswrapper[4727]: I0109 11:07:56.892449 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f6c5e4-6c29-48d0-a5af-819557cc9e04" path="/var/lib/kubelet/pods/55f6c5e4-6c29-48d0-a5af-819557cc9e04/volumes" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.226441 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerID="8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf" exitCode=0 Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.227557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerDied","Data":"8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf"} Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.228022 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.307364 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.615019 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.672988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.675732 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.675934 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.676022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") pod \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\" (UID: \"2e3d825a-0b57-4562-9a27-b985dc3ddc38\") " Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.679274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs" (OuterVolumeSpecName: "logs") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.686670 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx" (OuterVolumeSpecName: "kube-api-access-r5nkx") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "kube-api-access-r5nkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.791800 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5nkx\" (UniqueName: \"kubernetes.io/projected/2e3d825a-0b57-4562-9a27-b985dc3ddc38-kube-api-access-r5nkx\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.791848 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3d825a-0b57-4562-9a27-b985dc3ddc38-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.795721 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.797102 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data" (OuterVolumeSpecName: "config-data") pod "2e3d825a-0b57-4562-9a27-b985dc3ddc38" (UID: "2e3d825a-0b57-4562-9a27-b985dc3ddc38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.894475 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:57 crc kubenswrapper[4727]: I0109 11:07:57.894545 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3d825a-0b57-4562-9a27-b985dc3ddc38-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.255575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerStarted","Data":"8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b"} Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.255980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerStarted","Data":"d1a0173db997c0ae943d3dd42cc0514969543ab4509f28fa217bff9b0acb28ed"} Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.261456 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e3d825a-0b57-4562-9a27-b985dc3ddc38","Type":"ContainerDied","Data":"3e496057afd48fb428863c25133769c9e960876cd410faca157a7658ba5d522c"} Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.261493 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.261543 4727 scope.go:117] "RemoveContainer" containerID="8ef205e8c098c840e61d1106089b1ea88e18e5c166804c2a16b2ff04a57642cf" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.286620 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.286584241 podStartE2EDuration="2.286584241s" podCreationTimestamp="2026-01-09 11:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:07:58.277593548 +0000 UTC m=+1323.727498339" watchObservedRunningTime="2026-01-09 11:07:58.286584241 +0000 UTC m=+1323.736489022" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.322724 4727 scope.go:117] "RemoveContainer" containerID="82703af68d86d16fd1f7202c198636b07c35b6c615228d4709e59fe2abd6ff4e" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.323557 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.341594 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.350028 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 11:07:58 crc kubenswrapper[4727]: E0109 11:07:58.350731 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.350761 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log" Jan 09 11:07:58 crc kubenswrapper[4727]: E0109 11:07:58.350786 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api" Jan 
09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.350795 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.351033 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-api"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.351061 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" containerName="nova-api-log"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.352366 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.356447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.379669 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.406774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.406849 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.407008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.407115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.509878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.510197 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.516654 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.529857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.539000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"nova-api-0\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.711365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 11:07:58 crc kubenswrapper[4727]: I0109 11:07:58.875238 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3d825a-0b57-4562-9a27-b985dc3ddc38" path="/var/lib/kubelet/pods/2e3d825a-0b57-4562-9a27-b985dc3ddc38/volumes"
Jan 09 11:07:59 crc kubenswrapper[4727]: I0109 11:07:59.281292 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 09 11:07:59 crc kubenswrapper[4727]: I0109 11:07:59.912247 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 09 11:07:59 crc kubenswrapper[4727]: I0109 11:07:59.916626 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.292577 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerStarted","Data":"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"}
Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.292618 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerStarted","Data":"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"}
Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.292635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerStarted","Data":"f526c53e811d823737aee897638a2fd4e604c40040f0dc02dba42bf5050ad7d9"}
Jan 09 11:08:00 crc kubenswrapper[4727]: I0109 11:08:00.326068 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.326047625 podStartE2EDuration="2.326047625s" podCreationTimestamp="2026-01-09 11:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:00.324792765 +0000 UTC m=+1325.774697546" watchObservedRunningTime="2026-01-09 11:08:00.326047625 +0000 UTC m=+1325.775952406"
Jan 09 11:08:01 crc kubenswrapper[4727]: I0109 11:08:01.643292 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 09 11:08:04 crc kubenswrapper[4727]: I0109 11:08:04.719375 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 09 11:08:04 crc kubenswrapper[4727]: I0109 11:08:04.907935 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 09 11:08:04 crc kubenswrapper[4727]: I0109 11:08:04.908089 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 09 11:08:05 crc kubenswrapper[4727]: I0109 11:08:05.929769 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:08:05 crc kubenswrapper[4727]: I0109 11:08:05.929779 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:08:06 crc kubenswrapper[4727]: I0109 11:08:06.643209 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 09 11:08:06 crc kubenswrapper[4727]: I0109 11:08:06.675109 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 09 11:08:07 crc kubenswrapper[4727]: I0109 11:08:07.409211 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 09 11:08:08 crc kubenswrapper[4727]: I0109 11:08:08.712363 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 09 11:08:08 crc kubenswrapper[4727]: I0109 11:08:08.712449 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.405323 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.405419 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.794792 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:08:09 crc kubenswrapper[4727]: I0109 11:08:09.794792 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 09 11:08:14 crc kubenswrapper[4727]: I0109 11:08:14.916311 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 09 11:08:14 crc kubenswrapper[4727]: I0109 11:08:14.918756 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 09 11:08:14 crc kubenswrapper[4727]: I0109 11:08:14.923198 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 09 11:08:15 crc kubenswrapper[4727]: I0109 11:08:15.466912 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.280946 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.403761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") pod \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") "
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.404037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") pod \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") "
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.404082 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") pod \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\" (UID: \"f916ebd1-61eb-489a-be7d-e2cc06b152b6\") "
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.420721 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk" (OuterVolumeSpecName: "kube-api-access-cfcvk") pod "f916ebd1-61eb-489a-be7d-e2cc06b152b6" (UID: "f916ebd1-61eb-489a-be7d-e2cc06b152b6"). InnerVolumeSpecName "kube-api-access-cfcvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.440449 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f916ebd1-61eb-489a-be7d-e2cc06b152b6" (UID: "f916ebd1-61eb-489a-be7d-e2cc06b152b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.447110 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data" (OuterVolumeSpecName: "config-data") pod "f916ebd1-61eb-489a-be7d-e2cc06b152b6" (UID: "f916ebd1-61eb-489a-be7d-e2cc06b152b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475353 4727 generic.go:334] "Generic (PLEG): container finished" podID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a" exitCode=137
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475445 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475491 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerDied","Data":"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"}
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f916ebd1-61eb-489a-be7d-e2cc06b152b6","Type":"ContainerDied","Data":"60bccc0ec47f588ad42cb564633edde3321617957b8b8fda8f4da812cc7b79ef"}
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.475954 4727 scope.go:117] "RemoveContainer" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.506191 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.506360 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916ebd1-61eb-489a-be7d-e2cc06b152b6-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.506421 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfcvk\" (UniqueName: \"kubernetes.io/projected/f916ebd1-61eb-489a-be7d-e2cc06b152b6-kube-api-access-cfcvk\") on node \"crc\" DevicePath \"\""
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.530381 4727 scope.go:117] "RemoveContainer" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"
Jan 09 11:08:16 crc kubenswrapper[4727]: E0109 11:08:16.531221 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a\": container with ID starting with 046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a not found: ID does not exist" containerID="046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.531310 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a"} err="failed to get container status \"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a\": rpc error: code = NotFound desc = could not find container \"046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a\": container with ID starting with 046b14d74aa60c822f6b6926e4c912907b8176ed4e4478857d6264483fe78d7a not found: ID does not exist"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.550798 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.559752 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.576225 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:08:16 crc kubenswrapper[4727]: E0109 11:08:16.576692 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.576708 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.576906 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" containerName="nova-cell1-novncproxy-novncproxy"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.577608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.580827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.580929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.581141 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.619088 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711322 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpbd\" (UniqueName: \"kubernetes.io/projected/7275705c-d408-4eb4-af28-b9b51403b913-kube-api-access-mbpbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711425 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.711460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.813930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814164 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.814237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpbd\" (UniqueName: \"kubernetes.io/projected/7275705c-d408-4eb4-af28-b9b51403b913-kube-api-access-mbpbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.818298 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.819635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.819731 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.820201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7275705c-d408-4eb4-af28-b9b51403b913-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.839685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpbd\" (UniqueName: \"kubernetes.io/projected/7275705c-d408-4eb4-af28-b9b51403b913-kube-api-access-mbpbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"7275705c-d408-4eb4-af28-b9b51403b913\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.872849 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f916ebd1-61eb-489a-be7d-e2cc06b152b6" path="/var/lib/kubelet/pods/f916ebd1-61eb-489a-be7d-e2cc06b152b6/volumes"
Jan 09 11:08:16 crc kubenswrapper[4727]: I0109 11:08:16.895855 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 09 11:08:17 crc kubenswrapper[4727]: I0109 11:08:17.408082 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 09 11:08:17 crc kubenswrapper[4727]: W0109 11:08:17.415419 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7275705c_d408_4eb4_af28_b9b51403b913.slice/crio-5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d WatchSource:0}: Error finding container 5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d: Status 404 returned error can't find the container with id 5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d
Jan 09 11:08:17 crc kubenswrapper[4727]: I0109 11:08:17.487001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7275705c-d408-4eb4-af28-b9b51403b913","Type":"ContainerStarted","Data":"5ae197b93c6f26f2aae877fed1c1b66778ef53c91a405dd9c779685b7d8ff80d"}
Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.499259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7275705c-d408-4eb4-af28-b9b51403b913","Type":"ContainerStarted","Data":"ce5d1b45b36b5fa06f2ed56483b6d75519d9dc9bf45a022690ee452f1d296a91"}
Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.532708 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.532680602 podStartE2EDuration="2.532680602s" podCreationTimestamp="2026-01-09 11:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:18.524977379 +0000 UTC m=+1343.974882170" watchObservedRunningTime="2026-01-09 11:08:18.532680602 +0000 UTC m=+1343.982585383"
Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.717063 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.717847 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.721058 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 09 11:08:18 crc kubenswrapper[4727]: I0109 11:08:18.722005 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.511271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.517874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.795066 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"]
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.804219 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.833903 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"]
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.889779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.889857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890596 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.890691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.993772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.993830 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.993963 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.994006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.994057 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.994085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.996313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.997114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.997365 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:19 crc kubenswrapper[4727]: I0109 11:08:19.997675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.013040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.032796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"dnsmasq-dns-59cf4bdb65-dsdfn\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.139816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.578766 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 09 11:08:20 crc kubenswrapper[4727]: I0109 11:08:20.723457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"]
Jan 09 11:08:20 crc kubenswrapper[4727]: W0109 11:08:20.727241 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aa41a67_4a03_4479_8296_e3e0b3242cc6.slice/crio-4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96 WatchSource:0}: Error finding container 4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96: Status 404 returned error can't find the container with id 4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.533099 4727 generic.go:334] "Generic (PLEG): container finished" podID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerID="9c4c8b98157f83d68ea66f336ad75ea1176dca583b8fa920a9e02cc7a8302972" exitCode=0
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.535396 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerDied","Data":"9c4c8b98157f83d68ea66f336ad75ea1176dca583b8fa920a9e02cc7a8302972"}
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.535441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerStarted","Data":"4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96"}
Jan 09 11:08:21 crc kubenswrapper[4727]: I0109 11:08:21.896967 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan
09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.240342 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241211 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent" containerID="cri-o://d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca" gracePeriod=30 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241324 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent" containerID="cri-o://85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247" gracePeriod=30 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241326 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core" containerID="cri-o://b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7" gracePeriod=30 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.241308 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd" containerID="cri-o://b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c" gracePeriod=30 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.550911 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerStarted","Data":"5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6"} Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.551164 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557393 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c" exitCode=0 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557474 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7" exitCode=2 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c"} Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.557564 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7"} Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.588373 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" podStartSLOduration=3.588345205 podStartE2EDuration="3.588345205s" podCreationTimestamp="2026-01-09 11:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:22.578016989 +0000 UTC m=+1348.027921770" watchObservedRunningTime="2026-01-09 11:08:22.588345205 +0000 UTC m=+1348.038249986" Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.603211 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.604030 4727 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" containerID="cri-o://b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" gracePeriod=30 Jan 09 11:08:22 crc kubenswrapper[4727]: I0109 11:08:22.604210 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" containerID="cri-o://cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" gracePeriod=30 Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.571480 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca" exitCode=0 Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.571555 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca"} Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.574595 4727 generic.go:334] "Generic (PLEG): container finished" podID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" exitCode=143 Jan 09 11:08:23 crc kubenswrapper[4727]: I0109 11:08:23.574666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerDied","Data":"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"} Jan 09 11:08:25 crc kubenswrapper[4727]: I0109 11:08:25.621150 4727 generic.go:334] "Generic (PLEG): container finished" podID="255b7479-c152-4860-8978-4a81a53287cc" containerID="85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247" exitCode=0 Jan 09 11:08:25 crc kubenswrapper[4727]: I0109 11:08:25.621262 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247"} Jan 09 11:08:25 crc kubenswrapper[4727]: I0109 11:08:25.952377 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023789 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023839 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023896 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.023945 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024024 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024072 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024115 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.024156 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") pod \"255b7479-c152-4860-8978-4a81a53287cc\" (UID: \"255b7479-c152-4860-8978-4a81a53287cc\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.026054 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.032153 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.080984 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts" (OuterVolumeSpecName: "scripts") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.081069 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8" (OuterVolumeSpecName: "kube-api-access-sh9h8") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "kube-api-access-sh9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.095165 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.122618 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126590 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126631 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh9h8\" (UniqueName: \"kubernetes.io/projected/255b7479-c152-4860-8978-4a81a53287cc-kube-api-access-sh9h8\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126646 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255b7479-c152-4860-8978-4a81a53287cc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126661 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126675 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.126685 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.169173 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: 
"255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.174960 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data" (OuterVolumeSpecName: "config-data") pod "255b7479-c152-4860-8978-4a81a53287cc" (UID: "255b7479-c152-4860-8978-4a81a53287cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.208161 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.250806 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.250855 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255b7479-c152-4860-8978-4a81a53287cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.371700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.371841 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " Jan 09 11:08:26 crc 
kubenswrapper[4727]: I0109 11:08:26.371960 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.372130 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") pod \"54db797b-aa1b-4b6e-a17f-0287f920392c\" (UID: \"54db797b-aa1b-4b6e-a17f-0287f920392c\") " Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.379052 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs" (OuterVolumeSpecName: "logs") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.382995 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54db797b-aa1b-4b6e-a17f-0287f920392c-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.405780 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49" (OuterVolumeSpecName: "kube-api-access-4gn49") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "kube-api-access-4gn49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.478732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.485352 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.485391 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gn49\" (UniqueName: \"kubernetes.io/projected/54db797b-aa1b-4b6e-a17f-0287f920392c-kube-api-access-4gn49\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.501788 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data" (OuterVolumeSpecName: "config-data") pod "54db797b-aa1b-4b6e-a17f-0287f920392c" (UID: "54db797b-aa1b-4b6e-a17f-0287f920392c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.587387 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54db797b-aa1b-4b6e-a17f-0287f920392c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.634720 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255b7479-c152-4860-8978-4a81a53287cc","Type":"ContainerDied","Data":"a325755858225e11102c3b57ad31be80d35da46e13778310a2800ddb5d42db62"} Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.634798 4727 scope.go:117] "RemoveContainer" containerID="b2c3d8c7786b544873f81a08debbe2fed3cf1a5b4b124c78f0a7406dd4c9fc0c" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.634985 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640228 4727 generic.go:334] "Generic (PLEG): container finished" podID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" exitCode=0 Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerDied","Data":"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"} Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640333 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"54db797b-aa1b-4b6e-a17f-0287f920392c","Type":"ContainerDied","Data":"f526c53e811d823737aee897638a2fd4e604c40040f0dc02dba42bf5050ad7d9"} Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.640427 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.659765 4727 scope.go:117] "RemoveContainer" containerID="b4ac3cf8c85926a64015f0b88016993c9b88e946da9fef57320641923d2ea6c7" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.699641 4727 scope.go:117] "RemoveContainer" containerID="85be122de97d65f5f126f01d135c3ce832549ac96681b549ccf5a05617393247" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.704849 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.721444 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.727097 4727 scope.go:117] "RemoveContainer" containerID="d1684b4f1fdfd98833fe8bbadb33021c3bf22ae342d714101bfb025dd74c6cca" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.735252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.746567 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.749681 4727 scope.go:117] "RemoveContainer" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771176 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771689 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771709 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771724 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771730 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771743 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771750 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771764 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771770 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771778 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771783 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.771802 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771808 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771985 4727 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-log" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.771997 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="sg-core" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772008 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-central-agent" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772022 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="proxy-httpd" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772036 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="255b7479-c152-4860-8978-4a81a53287cc" containerName="ceilometer-notification-agent" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.772053 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" containerName="nova-api-api" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.773247 4727 scope.go:117] "RemoveContainer" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.773950 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.780691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.780922 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.781123 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.790707 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.792396 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.799069 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.800814 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.801831 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.802782 4727 scope.go:117] "RemoveContainer" containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.804221 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e\": container with ID starting with b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e not found: ID does not exist" 
containerID="b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.804253 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e"} err="failed to get container status \"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e\": rpc error: code = NotFound desc = could not find container \"b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e\": container with ID starting with b38f5ed278613c560c8a7e739bfcfc823ad3d37c36fc78cd792cf5464c0df74e not found: ID does not exist" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.804271 4727 scope.go:117] "RemoveContainer" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" Jan 09 11:08:26 crc kubenswrapper[4727]: E0109 11:08:26.804634 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665\": container with ID starting with cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665 not found: ID does not exist" containerID="cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.804656 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665"} err="failed to get container status \"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665\": rpc error: code = NotFound desc = could not find container \"cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665\": container with ID starting with cf62676a0a20b71ec6a579be2e146df76682f96a5e41c42f0558a5f25a8b6665 not found: ID does not exist" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.837271 4727 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.849552 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.873045 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255b7479-c152-4860-8978-4a81a53287cc" path="/var/lib/kubelet/pods/255b7479-c152-4860-8978-4a81a53287cc/volumes" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.873904 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54db797b-aa1b-4b6e-a17f-0287f920392c" path="/var/lib/kubelet/pods/54db797b-aa1b-4b6e-a17f-0287f920392c/volumes" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893927 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893950 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.893968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894041 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-config-data\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-scripts\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-run-httpd\") pod \"ceilometer-0\" 
(UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.894915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.895024 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.895233 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-log-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.895331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9nw\" (UniqueName: \"kubernetes.io/projected/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-kube-api-access-cs9nw\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.902778 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.925365 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997745 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-run-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997922 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.997972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-log-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") 
" pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998004 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9nw\" (UniqueName: \"kubernetes.io/projected/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-kube-api-access-cs9nw\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998083 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-config-data\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998215 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-scripts\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998242 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:26 crc kubenswrapper[4727]: I0109 11:08:26.998391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-run-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.000083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.000901 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-log-httpd\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.003896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.003964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.004639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-config-data\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.009295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.009867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " 
pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.010787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-scripts\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.013238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.014650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.020899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.021254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"nova-api-0\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.025417 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9nw\" (UniqueName: 
\"kubernetes.io/projected/bc762f8b-1dba-4c4a-bec8-30c9d5b27c24-kube-api-access-cs9nw\") pod \"ceilometer-0\" (UID: \"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24\") " pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.103251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.133614 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.647550 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.650666 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.678090 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.806244 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.945361 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.959420 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.964586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.965642 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 09 11:08:27 crc kubenswrapper[4727]: I0109 11:08:27.972281 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131389 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131490 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.131728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9kb\" (UniqueName: 
\"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.233852 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.233955 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.233985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.234029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.239159 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.240036 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.241634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.255869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"nova-cell1-cell-mapping-wtb77\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.299336 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.679091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerStarted","Data":"ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.679155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerStarted","Data":"9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.679167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerStarted","Data":"421034a4e0c580642b2ba309c9af86d09352bc6febabbaafe996bdae2b0a1dad"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.682974 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"314afaaa031c74fb8921e0263a22e087b8f7c777c96e18d6d855040c57e2fd64"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.683043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"92f749d97d787c6f55364a03df030025cfb62a1778b49399ed602f4bcf18c667"} Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.711312 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.711287842 podStartE2EDuration="2.711287842s" podCreationTimestamp="2026-01-09 11:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:28.701178412 
+0000 UTC m=+1354.151083193" watchObservedRunningTime="2026-01-09 11:08:28.711287842 +0000 UTC m=+1354.161192623" Jan 09 11:08:28 crc kubenswrapper[4727]: I0109 11:08:28.970690 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.738994 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerStarted","Data":"2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072"} Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.739481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerStarted","Data":"cdb5199777c08eb82f009b4267902debd8ae6355ee99bf36fd992bb76e143bcb"} Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.756777 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"0c2e19067acce7276f037db6618440ecb38f0b2632681376182d9d037b6ae398"} Jan 09 11:08:29 crc kubenswrapper[4727]: I0109 11:08:29.812859 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wtb77" podStartSLOduration=2.812837657 podStartE2EDuration="2.812837657s" podCreationTimestamp="2026-01-09 11:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:29.774057787 +0000 UTC m=+1355.223962588" watchObservedRunningTime="2026-01-09 11:08:29.812837657 +0000 UTC m=+1355.262742428" Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.150711 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:08:30 crc 
kubenswrapper[4727]: I0109 11:08:30.241831 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.242440 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" containerID="cri-o://e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01" gracePeriod=10 Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.780122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"8d33d3481591d48d40b9b44f2b11f796c92f8ba863cf0bd3de919fb6b2ea963f"} Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.788898 4727 generic.go:334] "Generic (PLEG): container finished" podID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerID="e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01" exitCode=0 Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.789920 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerDied","Data":"e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01"} Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.789944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" event={"ID":"0ad24155-2081-4c95-b3ba-2217f670d8b4","Type":"ContainerDied","Data":"c715a92f5aa615c93db65f6e9d930c15cd9844cbd3158043d67b9b3325878e65"} Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.789956 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c715a92f5aa615c93db65f6e9d930c15cd9844cbd3158043d67b9b3325878e65" Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.819490 4727 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.918504 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.918930 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919132 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919267 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.919453 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") pod \"0ad24155-2081-4c95-b3ba-2217f670d8b4\" (UID: \"0ad24155-2081-4c95-b3ba-2217f670d8b4\") " Jan 09 11:08:30 crc kubenswrapper[4727]: I0109 11:08:30.944804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7" (OuterVolumeSpecName: "kube-api-access-mdlw7") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "kube-api-access-mdlw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.003088 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.004673 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.010661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config" (OuterVolumeSpecName: "config") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022262 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022310 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022322 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdlw7\" (UniqueName: \"kubernetes.io/projected/0ad24155-2081-4c95-b3ba-2217f670d8b4-kube-api-access-mdlw7\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.022331 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.023372 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.079203 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ad24155-2081-4c95-b3ba-2217f670d8b4" (UID: "0ad24155-2081-4c95-b3ba-2217f670d8b4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.124414 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.124488 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ad24155-2081-4c95-b3ba-2217f670d8b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.798936 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-jqnl8" Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.851645 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:08:31 crc kubenswrapper[4727]: I0109 11:08:31.872817 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-jqnl8"] Jan 09 11:08:32 crc kubenswrapper[4727]: I0109 11:08:32.904760 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" path="/var/lib/kubelet/pods/0ad24155-2081-4c95-b3ba-2217f670d8b4/volumes" Jan 09 11:08:35 crc kubenswrapper[4727]: I0109 11:08:35.848105 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc762f8b-1dba-4c4a-bec8-30c9d5b27c24","Type":"ContainerStarted","Data":"3a3c3a2e4e13e025a46effc4d82811518a3cc554f573b4967222503a57c1f202"} Jan 09 11:08:35 crc kubenswrapper[4727]: I0109 11:08:35.849106 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 11:08:35 crc kubenswrapper[4727]: I0109 11:08:35.891200 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.1852602819999998 podStartE2EDuration="9.891169055s" podCreationTimestamp="2026-01-09 11:08:26 +0000 UTC" firstStartedPulling="2026-01-09 11:08:27.650326801 +0000 UTC m=+1353.100231582" lastFinishedPulling="2026-01-09 11:08:35.356235574 +0000 UTC m=+1360.806140355" observedRunningTime="2026-01-09 11:08:35.877864919 +0000 UTC m=+1361.327769700" watchObservedRunningTime="2026-01-09 11:08:35.891169055 +0000 UTC m=+1361.341073846" Jan 09 11:08:36 crc kubenswrapper[4727]: I0109 11:08:36.860494 4727 generic.go:334] "Generic (PLEG): container finished" podID="fd540af1-9862-4759-ad16-587bbd49fea1" containerID="2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072" exitCode=0 Jan 09 11:08:36 crc kubenswrapper[4727]: I0109 11:08:36.873111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerDied","Data":"2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072"} Jan 09 11:08:37 crc kubenswrapper[4727]: I0109 11:08:37.134767 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:37 crc kubenswrapper[4727]: I0109 11:08:37.135117 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.154074 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.155035 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.316639 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.389896 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.390262 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.390332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.390460 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") pod \"fd540af1-9862-4759-ad16-587bbd49fea1\" (UID: \"fd540af1-9862-4759-ad16-587bbd49fea1\") " Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.398827 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb" (OuterVolumeSpecName: "kube-api-access-5f9kb") pod 
"fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "kube-api-access-5f9kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.407824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts" (OuterVolumeSpecName: "scripts") pod "fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.447906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data" (OuterVolumeSpecName: "config-data") pod "fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.458602 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd540af1-9862-4759-ad16-587bbd49fea1" (UID: "fd540af1-9862-4759-ad16-587bbd49fea1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.494853 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.495252 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9kb\" (UniqueName: \"kubernetes.io/projected/fd540af1-9862-4759-ad16-587bbd49fea1-kube-api-access-5f9kb\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.495347 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.495413 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd540af1-9862-4759-ad16-587bbd49fea1-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.911074 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtb77" event={"ID":"fd540af1-9862-4759-ad16-587bbd49fea1","Type":"ContainerDied","Data":"cdb5199777c08eb82f009b4267902debd8ae6355ee99bf36fd992bb76e143bcb"} Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.911670 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb5199777c08eb82f009b4267902debd8ae6355ee99bf36fd992bb76e143bcb" Jan 09 11:08:38 crc kubenswrapper[4727]: I0109 11:08:38.911170 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtb77" Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.095922 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.096361 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" containerID="cri-o://9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.096598 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" containerID="cri-o://ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.145068 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.145535 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" containerID="cri-o://e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.145756 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" containerID="cri-o://64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.170744 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.171032 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" containerID="cri-o://8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" gracePeriod=30 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.404950 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.405041 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.922775 4727 generic.go:334] "Generic (PLEG): container finished" podID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerID="9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180" exitCode=143 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.922886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerDied","Data":"9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180"} Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.925421 4727 generic.go:334] "Generic (PLEG): container finished" podID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" exitCode=143 Jan 09 11:08:39 crc kubenswrapper[4727]: I0109 11:08:39.925477 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerDied","Data":"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb"} Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.644259 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.645445 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.646019 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 09 11:08:41 crc kubenswrapper[4727]: E0109 11:08:41.646069 4727 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:41 
crc kubenswrapper[4727]: I0109 11:08:41.949971 4727 generic.go:334] "Generic (PLEG): container finished" podID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" exitCode=0 Jan 09 11:08:41 crc kubenswrapper[4727]: I0109 11:08:41.950054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerDied","Data":"8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.317064 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:34826->10.217.0.198:8775: read: connection reset by peer" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.317069 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:34812->10.217.0.198:8775: read: connection reset by peer" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.559566 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.704786 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") pod \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.705033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") pod \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.705154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") pod \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\" (UID: \"bd5e3ba1-41fe-4ad8-997a-cae63667c74c\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.713009 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl" (OuterVolumeSpecName: "kube-api-access-jnnfl") pod "bd5e3ba1-41fe-4ad8-997a-cae63667c74c" (UID: "bd5e3ba1-41fe-4ad8-997a-cae63667c74c"). InnerVolumeSpecName "kube-api-access-jnnfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.741857 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd5e3ba1-41fe-4ad8-997a-cae63667c74c" (UID: "bd5e3ba1-41fe-4ad8-997a-cae63667c74c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.750888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data" (OuterVolumeSpecName: "config-data") pod "bd5e3ba1-41fe-4ad8-997a-cae63667c74c" (UID: "bd5e3ba1-41fe-4ad8-997a-cae63667c74c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.811475 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnnfl\" (UniqueName: \"kubernetes.io/projected/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-kube-api-access-jnnfl\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.811530 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.811549 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e3ba1-41fe-4ad8-997a-cae63667c74c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.823087 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913486 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913870 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.913929 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") pod \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\" (UID: \"3b8ddc88-eab5-4564-a55d-aafb1d7084d2\") " Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.914783 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs" (OuterVolumeSpecName: "logs") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.918985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh" (OuterVolumeSpecName: "kube-api-access-2nmfh") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "kube-api-access-2nmfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.945457 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.967380 4727 generic.go:334] "Generic (PLEG): container finished" podID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" exitCode=0 Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.967488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerDied","Data":"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.968444 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b8ddc88-eab5-4564-a55d-aafb1d7084d2","Type":"ContainerDied","Data":"2e10e8e795ff975c0508e9bcbbece45ba505b4a74b5775037e57f3ba76b06c91"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.968482 4727 scope.go:117] "RemoveContainer" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.974083 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.975612 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data" (OuterVolumeSpecName: "config-data") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.980335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bd5e3ba1-41fe-4ad8-997a-cae63667c74c","Type":"ContainerDied","Data":"d1a0173db997c0ae943d3dd42cc0514969543ab4509f28fa217bff9b0acb28ed"} Jan 09 11:08:42 crc kubenswrapper[4727]: I0109 11:08:42.980433 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.007639 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3b8ddc88-eab5-4564-a55d-aafb1d7084d2" (UID: "3b8ddc88-eab5-4564-a55d-aafb1d7084d2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016162 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nmfh\" (UniqueName: \"kubernetes.io/projected/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-kube-api-access-2nmfh\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016199 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016209 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016220 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.016230 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8ddc88-eab5-4564-a55d-aafb1d7084d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.071780 4727 scope.go:117] "RemoveContainer" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.094169 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.179892 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.183188 4727 scope.go:117] "RemoveContainer" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.184192 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d\": container with ID starting with 64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d not found: ID does not exist" containerID="64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.184257 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d"} err="failed to get container status \"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d\": rpc error: code = NotFound desc = could not find container \"64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d\": container with ID starting with 
64099a8c33dbf6c3ff6470c09ab701f8a2cf4c0888da9fea0f3646c84186a22d not found: ID does not exist" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.184301 4727 scope.go:117] "RemoveContainer" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.184994 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb\": container with ID starting with e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb not found: ID does not exist" containerID="e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.185040 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb"} err="failed to get container status \"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb\": rpc error: code = NotFound desc = could not find container \"e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb\": container with ID starting with e7adc35848f7450f63792e4fc2c6d031c36918540cd9add794dda558f78d8afb not found: ID does not exist" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.185060 4727 scope.go:117] "RemoveContainer" containerID="8ad3319393c1a233aaad804cb30cf66220f7b87d8593dedaa9f0b6db6db44e5b" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198101 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198828 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198846 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198862 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198868 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198892 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198899 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198912 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="init" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198918 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="init" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198928 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" containerName="nova-manage" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198934 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" containerName="nova-manage" Jan 09 11:08:43 crc kubenswrapper[4727]: E0109 11:08:43.198969 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.198978 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199153 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad24155-2081-4c95-b3ba-2217f670d8b4" containerName="dnsmasq-dns" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199162 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" containerName="nova-scheduler-scheduler" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199184 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-metadata" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199195 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" containerName="nova-manage" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.199208 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" containerName="nova-metadata-log" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.200007 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.202162 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.213448 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.313406 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.322451 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.328740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.328796 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmcz\" (UniqueName: \"kubernetes.io/projected/1203f055-468b-48e1-b859-78a4d11d5034-kube-api-access-cmmcz\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.328833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-config-data\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.350355 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 
11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.352295 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.355544 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.355857 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.376858 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.430590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.430642 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6024d35-671e-4814-9c13-de9897a984ee-logs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.430681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcpg\" 
(UniqueName: \"kubernetes.io/projected/c6024d35-671e-4814-9c13-de9897a984ee-kube-api-access-hrcpg\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431177 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-config-data\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431415 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmcz\" (UniqueName: \"kubernetes.io/projected/1203f055-468b-48e1-b859-78a4d11d5034-kube-api-access-cmmcz\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.431528 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-config-data\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.436370 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-config-data\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " 
pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.436561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1203f055-468b-48e1-b859-78a4d11d5034-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.454000 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmcz\" (UniqueName: \"kubernetes.io/projected/1203f055-468b-48e1-b859-78a4d11d5034-kube-api-access-cmmcz\") pod \"nova-scheduler-0\" (UID: \"1203f055-468b-48e1-b859-78a4d11d5034\") " pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.529242 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcpg\" (UniqueName: \"kubernetes.io/projected/c6024d35-671e-4814-9c13-de9897a984ee-kube-api-access-hrcpg\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-config-data\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534268 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6024d35-671e-4814-9c13-de9897a984ee-logs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.534337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.537683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6024d35-671e-4814-9c13-de9897a984ee-logs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.538880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.540054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.543301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c6024d35-671e-4814-9c13-de9897a984ee-config-data\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.563325 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcpg\" (UniqueName: \"kubernetes.io/projected/c6024d35-671e-4814-9c13-de9897a984ee-kube-api-access-hrcpg\") pod \"nova-metadata-0\" (UID: \"c6024d35-671e-4814-9c13-de9897a984ee\") " pod="openstack/nova-metadata-0" Jan 09 11:08:43 crc kubenswrapper[4727]: I0109 11:08:43.687185 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.107958 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.228486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 11:08:44 crc kubenswrapper[4727]: W0109 11:08:44.233467 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6024d35_671e_4814_9c13_de9897a984ee.slice/crio-fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32 WatchSource:0}: Error finding container fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32: Status 404 returned error can't find the container with id fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32 Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.876043 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8ddc88-eab5-4564-a55d-aafb1d7084d2" path="/var/lib/kubelet/pods/3b8ddc88-eab5-4564-a55d-aafb1d7084d2/volumes" Jan 09 11:08:44 crc kubenswrapper[4727]: I0109 11:08:44.877179 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd5e3ba1-41fe-4ad8-997a-cae63667c74c" path="/var/lib/kubelet/pods/bd5e3ba1-41fe-4ad8-997a-cae63667c74c/volumes" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.007980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6024d35-671e-4814-9c13-de9897a984ee","Type":"ContainerStarted","Data":"eb77879a9872318ce0bcd8eba66410cbc7a94538274be7a56f2b2430825c33c8"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.008407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6024d35-671e-4814-9c13-de9897a984ee","Type":"ContainerStarted","Data":"96eb51f266c155d1a08f738b33bc4ed9f8d9117193d99b3d916d10faebe405f7"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.008489 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6024d35-671e-4814-9c13-de9897a984ee","Type":"ContainerStarted","Data":"fcf6b5aa2b5fa089786a3db3d8ba436dcb39cc1d49438806e614d7bb2c244d32"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.010425 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1203f055-468b-48e1-b859-78a4d11d5034","Type":"ContainerStarted","Data":"e2d3ff1b5df6379d7a8debd96fe2ecc4093799357bb5247dff9f51e8c37fcc10"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.010473 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1203f055-468b-48e1-b859-78a4d11d5034","Type":"ContainerStarted","Data":"2fc2590206384788b14d3492c5b28d63b1dd46fbffbf24870eac90278edc0e95"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013574 4727 generic.go:334] "Generic (PLEG): container finished" podID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerID="ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30" exitCode=0 Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013612 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerDied","Data":"ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013631 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb84a78-3539-489f-a5d0-417c0c2f1e4d","Type":"ContainerDied","Data":"421034a4e0c580642b2ba309c9af86d09352bc6febabbaafe996bdae2b0a1dad"} Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.013647 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421034a4e0c580642b2ba309c9af86d09352bc6febabbaafe996bdae2b0a1dad" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.041634 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.041613466 podStartE2EDuration="2.041613466s" podCreationTimestamp="2026-01-09 11:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:45.034347543 +0000 UTC m=+1370.484252344" watchObservedRunningTime="2026-01-09 11:08:45.041613466 +0000 UTC m=+1370.491518247" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.048064 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.055146 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.055123977 podStartE2EDuration="2.055123977s" podCreationTimestamp="2026-01-09 11:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:45.050553688 +0000 UTC m=+1370.500458469" watchObservedRunningTime="2026-01-09 11:08:45.055123977 +0000 UTC m=+1370.505028758" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071359 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.071476 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: 
\"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.073866 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs" (OuterVolumeSpecName: "logs") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.076716 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.076779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") pod \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\" (UID: \"deb84a78-3539-489f-a5d0-417c0c2f1e4d\") " Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.077818 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb84a78-3539-489f-a5d0-417c0c2f1e4d-logs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.079928 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74" (OuterVolumeSpecName: "kube-api-access-9cq74") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "kube-api-access-9cq74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.105570 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.116033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data" (OuterVolumeSpecName: "config-data") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.136734 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.153190 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "deb84a78-3539-489f-a5d0-417c0c2f1e4d" (UID: "deb84a78-3539-489f-a5d0-417c0c2f1e4d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.180709 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.180969 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.181131 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.181209 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/deb84a78-3539-489f-a5d0-417c0c2f1e4d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:45 crc kubenswrapper[4727]: I0109 11:08:45.181267 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/deb84a78-3539-489f-a5d0-417c0c2f1e4d-kube-api-access-9cq74\") on node \"crc\" DevicePath \"\"" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.023885 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.060041 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.071696 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.098895 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: E0109 11:08:46.099291 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099309 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" Jan 09 11:08:46 crc kubenswrapper[4727]: E0109 11:08:46.099337 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099345 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099545 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-log" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.099576 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" containerName="nova-api-api" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.100589 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.106350 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.106476 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.106643 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.118554 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.220894 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdfq\" (UniqueName: \"kubernetes.io/projected/7bfcd192-734d-4709-b2c3-9abafc15a30e-kube-api-access-vkdfq\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221009 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-config-data\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfcd192-734d-4709-b2c3-9abafc15a30e-logs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-public-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.221443 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-config-data\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfcd192-734d-4709-b2c3-9abafc15a30e-logs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323144 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-public-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc 
kubenswrapper[4727]: I0109 11:08:46.323178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdfq\" (UniqueName: \"kubernetes.io/projected/7bfcd192-734d-4709-b2c3-9abafc15a30e-kube-api-access-vkdfq\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.323678 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfcd192-734d-4709-b2c3-9abafc15a30e-logs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.328736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-config-data\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.329762 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.330153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.330193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bfcd192-734d-4709-b2c3-9abafc15a30e-public-tls-certs\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.342787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdfq\" (UniqueName: \"kubernetes.io/projected/7bfcd192-734d-4709-b2c3-9abafc15a30e-kube-api-access-vkdfq\") pod \"nova-api-0\" (UID: \"7bfcd192-734d-4709-b2c3-9abafc15a30e\") " pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.426177 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.873469 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb84a78-3539-489f-a5d0-417c0c2f1e4d" path="/var/lib/kubelet/pods/deb84a78-3539-489f-a5d0-417c0c2f1e4d/volumes" Jan 09 11:08:46 crc kubenswrapper[4727]: I0109 11:08:46.902782 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 11:08:47 crc kubenswrapper[4727]: I0109 11:08:47.036103 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfcd192-734d-4709-b2c3-9abafc15a30e","Type":"ContainerStarted","Data":"9297c6cd4fca3b2bb3119fc5e11df9bcc876ae93f9174e875ab5072c4e2dcaa1"} Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.049012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfcd192-734d-4709-b2c3-9abafc15a30e","Type":"ContainerStarted","Data":"adf2307c5f35eee090c67df22f14204ab8e5426b7fefd531becc7898a9f485c3"} Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.049347 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfcd192-734d-4709-b2c3-9abafc15a30e","Type":"ContainerStarted","Data":"de6ca1e17c531f8d9812bdd5ae78b5648b4ab7e7f290693c06458bc8db3857df"} Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.085407 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.085378514 podStartE2EDuration="2.085378514s" podCreationTimestamp="2026-01-09 11:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:08:48.076641246 +0000 UTC m=+1373.526546047" watchObservedRunningTime="2026-01-09 11:08:48.085378514 +0000 UTC m=+1373.535283295" Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.529441 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.688294 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:08:48 crc kubenswrapper[4727]: I0109 11:08:48.688459 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.530102 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.563615 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.688020 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 11:08:53 crc kubenswrapper[4727]: I0109 11:08:53.688081 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 11:08:54 crc kubenswrapper[4727]: I0109 11:08:54.135384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 11:08:54 crc kubenswrapper[4727]: I0109 11:08:54.700856 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c6024d35-671e-4814-9c13-de9897a984ee" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:54 crc kubenswrapper[4727]: I0109 11:08:54.701326 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c6024d35-671e-4814-9c13-de9897a984ee" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 
11:08:56 crc kubenswrapper[4727]: I0109 11:08:56.426799 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:56 crc kubenswrapper[4727]: I0109 11:08:56.427476 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 11:08:57 crc kubenswrapper[4727]: I0109 11:08:57.115543 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 11:08:57 crc kubenswrapper[4727]: I0109 11:08:57.445676 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7bfcd192-734d-4709-b2c3-9abafc15a30e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:08:57 crc kubenswrapper[4727]: I0109 11:08:57.445714 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7bfcd192-734d-4709-b2c3-9abafc15a30e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.697879 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.698670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.707131 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 11:09:03 crc kubenswrapper[4727]: I0109 11:09:03.709193 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.433313 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.434803 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.434837 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 11:09:06 crc kubenswrapper[4727]: I0109 11:09:06.445373 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 11:09:07 crc kubenswrapper[4727]: I0109 11:09:07.262628 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 11:09:07 crc kubenswrapper[4727]: I0109 11:09:07.270865 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.405114 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.405612 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.405673 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.406638 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:09:09 crc kubenswrapper[4727]: I0109 11:09:09.406710 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a" gracePeriod=600 Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.296120 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a" exitCode=0 Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.296179 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a"} Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.297043 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019"} Jan 09 11:09:10 crc kubenswrapper[4727]: I0109 11:09:10.297076 4727 scope.go:117] "RemoveContainer" containerID="3c04d245b7cdab72548d43a943c79e33857b9a9a70781338e853db9654f0dd7c" Jan 09 11:09:15 crc kubenswrapper[4727]: I0109 11:09:15.779274 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:16 crc kubenswrapper[4727]: I0109 11:09:16.809332 4727 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:20 crc kubenswrapper[4727]: I0109 11:09:20.883563 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" containerID="cri-o://9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633" gracePeriod=604795 Jan 09 11:09:21 crc kubenswrapper[4727]: I0109 11:09:21.695078 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" containerID="cri-o://6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" gracePeriod=604796 Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.511632 4727 generic.go:334] "Generic (PLEG): container finished" podID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerID="9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633" exitCode=0 Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.511697 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerDied","Data":"9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633"} Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.512457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7a0dc55-5ff9-4b69-8b54-a124f04e383e","Type":"ContainerDied","Data":"992da0c7f6705ab24fafadc1d428d6d6e4d619876e23e4c5406d83cc5794cf74"} Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.512475 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992da0c7f6705ab24fafadc1d428d6d6e4d619876e23e4c5406d83cc5794cf74" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.584850 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718288 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718487 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718564 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718590 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.718675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") pod \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\" (UID: \"e7a0dc55-5ff9-4b69-8b54-a124f04e383e\") " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 
11:09:27.719839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.720181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.724100 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.730815 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.733187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.744837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96" (OuterVolumeSpecName: "kube-api-access-bfc96") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "kube-api-access-bfc96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.745376 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info" (OuterVolumeSpecName: "pod-info") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.753912 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.792151 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data" (OuterVolumeSpecName: "config-data") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.819067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf" (OuterVolumeSpecName: "server-conf") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821331 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821383 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfc96\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-kube-api-access-bfc96\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821397 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821407 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 
11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821417 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821425 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821470 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821481 4727 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821491 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.821500 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.869143 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.890906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e7a0dc55-5ff9-4b69-8b54-a124f04e383e" (UID: "e7a0dc55-5ff9-4b69-8b54-a124f04e383e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.924088 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7a0dc55-5ff9-4b69-8b54-a124f04e383e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:27 crc kubenswrapper[4727]: I0109 11:09:27.924269 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.357614 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440040 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440471 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440555 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") pod 
\"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440690 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440772 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440871 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440911 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.440944 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") pod \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\" (UID: \"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60\") " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.441271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.441609 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.443202 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.444325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.450632 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.451083 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.453584 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.456781 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv" (OuterVolumeSpecName: "kube-api-access-r8mrv") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "kube-api-access-r8mrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.461639 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info" (OuterVolumeSpecName: "pod-info") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.485297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data" (OuterVolumeSpecName: "config-data") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523273 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" exitCode=0 Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523335 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523378 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerDied","Data":"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56"} Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523443 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a6a64ec-e743-4fa7-8e3e-5f628ebeea60","Type":"ContainerDied","Data":"db17648fc3f40a57307203f5c840db822e3e04b15d7210b6d21d30d0fcfddd75"} Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.523468 4727 scope.go:117] "RemoveContainer" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.532820 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf" (OuterVolumeSpecName: "server-conf") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545217 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545247 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545255 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545266 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545278 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545306 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545315 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545328 4727 reconciler_common.go:293] 
"Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.545337 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8mrv\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-kube-api-access-r8mrv\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.571969 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" (UID: "2a6a64ec-e743-4fa7-8e3e-5f628ebeea60"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.581382 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.592934 4727 scope.go:117] "RemoveContainer" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.601340 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.615132 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.630689 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631273 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="setup-container" Jan 09 11:09:28 crc 
kubenswrapper[4727]: I0109 11:09:28.631300 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="setup-container" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631333 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="setup-container" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631342 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="setup-container" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631368 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631377 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.631395 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631403 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631664 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.631695 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.633100 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.633607 4727 scope.go:117] "RemoveContainer" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.634220 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56\": container with ID starting with 6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56 not found: ID does not exist" containerID="6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.634259 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56"} err="failed to get container status \"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56\": rpc error: code = NotFound desc = could not find container \"6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56\": container with ID starting with 6c054f8feba5974adbad5033205d9477244dad733fc0df563ac0c420ab5dbf56 not found: ID does not exist" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.634282 4727 scope.go:117] "RemoveContainer" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" Jan 09 11:09:28 crc kubenswrapper[4727]: E0109 11:09:28.634812 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3\": container with ID starting with fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3 not found: ID does not exist" containerID="fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 
11:09:28.634843 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3"} err="failed to get container status \"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3\": rpc error: code = NotFound desc = could not find container \"fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3\": container with ID starting with fe061c88b899f791609f45b5d6543c0f7e04c18984f794cd732270e162d10cf3 not found: ID does not exist" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.637191 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638565 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638925 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638592 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xx2j9" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638724 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.638762 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.644326 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.647845 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc 
kubenswrapper[4727]: I0109 11:09:28.647877 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.651266 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749299 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749595 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749695 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwptt\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-kube-api-access-jwptt\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.749769 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 
11:09:28.749878 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750009 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750102 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.750373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852633 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwptt\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-kube-api-access-jwptt\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852789 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852841 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 
11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.852915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.854459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.857864 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.858083 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.858587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.858747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.860767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.861272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.865307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.873211 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " 
pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.874074 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.882614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwptt\" (UniqueName: \"kubernetes.io/projected/bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9-kube-api-access-jwptt\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.887214 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a0dc55-5ff9-4b69-8b54-a124f04e383e" path="/var/lib/kubelet/pods/e7a0dc55-5ff9-4b69-8b54-a124f04e383e/volumes" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.892718 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.906238 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.917129 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.919257 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.924471 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.924715 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925317 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925449 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925582 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j7rc6" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925716 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.925931 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.926202 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.933332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9\") " pod="openstack/rabbitmq-server-0" Jan 09 11:09:28 crc kubenswrapper[4727]: I0109 11:09:28.978976 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.081741 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a49793da-9c08-47ea-892e-fe9e5b16d309-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082523 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082573 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082682 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a49793da-9c08-47ea-892e-fe9e5b16d309-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082848 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.082880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntlh\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-kube-api-access-mntlh\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185697 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185818 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a49793da-9c08-47ea-892e-fe9e5b16d309-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185916 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.185986 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntlh\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-kube-api-access-mntlh\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.186072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.186141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a49793da-9c08-47ea-892e-fe9e5b16d309-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187164 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187719 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.187778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a49793da-9c08-47ea-892e-fe9e5b16d309-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.188271 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.188644 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.193836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a49793da-9c08-47ea-892e-fe9e5b16d309-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.194405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.195615 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.196457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a49793da-9c08-47ea-892e-fe9e5b16d309-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.207366 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntlh\" (UniqueName: \"kubernetes.io/projected/a49793da-9c08-47ea-892e-fe9e5b16d309-kube-api-access-mntlh\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.244599 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a49793da-9c08-47ea-892e-fe9e5b16d309\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.292876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.399696 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"]
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.401531 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.404407 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.419310 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"]
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494431 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494495 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494560 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.494634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.495158 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.558936 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 11:09:29 crc kubenswrapper[4727]: W0109 11:09:29.562398 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf1c8d7_2c22_41a5_a1fc_64e9c35bacb9.slice/crio-9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037 WatchSource:0}: Error finding container 9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037: Status 404 returned error can't find the container with id 9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597632 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597690 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597715 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597756 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.597779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.598292 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.598891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.598964 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.599005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.599818 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.599910 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.618549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"dnsmasq-dns-67b789f86c-4srmt\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.738553 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:29 crc kubenswrapper[4727]: I0109 11:09:29.868066 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 11:09:29 crc kubenswrapper[4727]: W0109 11:09:29.869358 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda49793da_9c08_47ea_892e_fe9e5b16d309.slice/crio-d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12 WatchSource:0}: Error finding container d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12: Status 404 returned error can't find the container with id d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12
Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.252855 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"]
Jan 09 11:09:30 crc kubenswrapper[4727]: W0109 11:09:30.260697 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1b5611d_b2c2_4a4a_897c_7f37995529cd.slice/crio-b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed WatchSource:0}: Error finding container b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed: Status 404 returned error can't find the container with id b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed
Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.548275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerStarted","Data":"9eb30f234907e59a5d80d3c2706f2c2a6dab4cbd855c7c11f15a477830302037"}
Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.550098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerStarted","Data":"b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed"}
Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.551370 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerStarted","Data":"d68d70ae89bd9e66cd90b0bd8557835f760cceb9f6489c4fce1dc03be2c45f12"}
Jan 09 11:09:30 crc kubenswrapper[4727]: I0109 11:09:30.874073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" path="/var/lib/kubelet/pods/2a6a64ec-e743-4fa7-8e3e-5f628ebeea60/volumes"
Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.564897 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerID="2132af742e53d9d08083cc335b4222b41afd7fcbcecb2bbd86a5624917def2a7" exitCode=0
Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.565187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerDied","Data":"2132af742e53d9d08083cc335b4222b41afd7fcbcecb2bbd86a5624917def2a7"}
Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.567534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerStarted","Data":"1c2f8ec07af4960828b3ab65dbd5b3a0ee3b340d21805a2f399fb9e5c66ecda7"}
Jan 09 11:09:31 crc kubenswrapper[4727]: I0109 11:09:31.569050 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerStarted","Data":"6e03fb8f18f09c152e786359641442571a6301ed4efc4901838ff5afd287285b"}
Jan 09 11:09:32 crc kubenswrapper[4727]: I0109 11:09:32.585919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerStarted","Data":"390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9"}
Jan 09 11:09:32 crc kubenswrapper[4727]: I0109 11:09:32.613916 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" podStartSLOduration=3.613886741 podStartE2EDuration="3.613886741s" podCreationTimestamp="2026-01-09 11:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:09:32.609593589 +0000 UTC m=+1418.059498380" watchObservedRunningTime="2026-01-09 11:09:32.613886741 +0000 UTC m=+1418.063791522"
Jan 09 11:09:33 crc kubenswrapper[4727]: I0109 11:09:33.289207 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2a6a64ec-e743-4fa7-8e3e-5f628ebeea60" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: i/o timeout"
Jan 09 11:09:33 crc kubenswrapper[4727]: I0109 11:09:33.602320 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:36 crc kubenswrapper[4727]: I0109 11:09:36.813566 4727 scope.go:117] "RemoveContainer" containerID="4e6882c4f32dec9e5098ba742e2c34d151d018e9f63b15aa14f663a278aa1af0"
Jan 09 11:09:39 crc kubenswrapper[4727]: I0109 11:09:39.741254 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-4srmt"
Jan 09 11:09:39 crc kubenswrapper[4727]: I0109 11:09:39.822962 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"]
Jan 09 11:09:39 crc kubenswrapper[4727]: I0109 11:09:39.823457 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" containerID="cri-o://5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6" gracePeriod=10
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.033745 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-j4b5d"]
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.038911 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.054648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-j4b5d"]
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.140713 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.152721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.152798 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153066 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-config\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153150 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cklng\" (UniqueName: \"kubernetes.io/projected/95c81071-440f-4823-8240-dfd215cdf314-kube-api-access-cklng\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153340 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.153423 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256273 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-config\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cklng\" (UniqueName: \"kubernetes.io/projected/95c81071-440f-4823-8240-dfd215cdf314-kube-api-access-cklng\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.256465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.257311 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.257339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-config\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.257944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.258044 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.258043 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.258297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c81071-440f-4823-8240-dfd215cdf314-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.279835 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cklng\" (UniqueName: \"kubernetes.io/projected/95c81071-440f-4823-8240-dfd215cdf314-kube-api-access-cklng\") pod \"dnsmasq-dns-cb6ffcf87-j4b5d\" (UID: \"95c81071-440f-4823-8240-dfd215cdf314\") " pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.365190 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d"
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.690910 4727 generic.go:334] "Generic (PLEG): container finished" podID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerID="5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6" exitCode=0
Jan 09 11:09:40 crc kubenswrapper[4727]: I0109 11:09:40.690979 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerDied","Data":"5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6"}
Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:40.969420 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-j4b5d"]
Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.299126 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn"
Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413170 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") "
Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") "
Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413751 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\"
(UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.413985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.414010 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.414577 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") pod \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\" (UID: \"0aa41a67-4a03-4479-8296-e3e0b3242cc6\") " Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.419884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml" (OuterVolumeSpecName: "kube-api-access-g9xml") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "kube-api-access-g9xml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.472221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.472711 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.478168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config" (OuterVolumeSpecName: "config") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.491678 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.496502 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0aa41a67-4a03-4479-8296-e3e0b3242cc6" (UID: "0aa41a67-4a03-4479-8296-e3e0b3242cc6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.516947 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.516983 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517000 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517076 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517095 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0aa41a67-4a03-4479-8296-e3e0b3242cc6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.517109 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9xml\" (UniqueName: \"kubernetes.io/projected/0aa41a67-4a03-4479-8296-e3e0b3242cc6-kube-api-access-g9xml\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.703988 4727 generic.go:334] "Generic (PLEG): container finished" podID="95c81071-440f-4823-8240-dfd215cdf314" containerID="dcab4742464c8a7ea97ad83510fd8fc8fd047c920ce480909a4163ed2605b779" exitCode=0 Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.704104 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" event={"ID":"95c81071-440f-4823-8240-dfd215cdf314","Type":"ContainerDied","Data":"dcab4742464c8a7ea97ad83510fd8fc8fd047c920ce480909a4163ed2605b779"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.704177 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" event={"ID":"95c81071-440f-4823-8240-dfd215cdf314","Type":"ContainerStarted","Data":"4f82a93e345e376468c95125706af6ab6b5438b8bb1a6593cdada3863380e9f4"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.708009 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" event={"ID":"0aa41a67-4a03-4479-8296-e3e0b3242cc6","Type":"ContainerDied","Data":"4c3c5656ab7740ee585b02abc7ff96c0fcb25905f3c3cef4df25c6d92b13bf96"} Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.708104 4727 scope.go:117] "RemoveContainer" containerID="5fedb2ff35997a343ee6a457e8731c2daeaa887188907a14994676a6039978a6" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.708041 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-dsdfn" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.740564 4727 scope.go:117] "RemoveContainer" containerID="9c4c8b98157f83d68ea66f336ad75ea1176dca583b8fa920a9e02cc7a8302972" Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.760569 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:09:41 crc kubenswrapper[4727]: I0109 11:09:41.775460 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-dsdfn"] Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.720375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" event={"ID":"95c81071-440f-4823-8240-dfd215cdf314","Type":"ContainerStarted","Data":"9feb824d2893efaac9f51a8f33da94a335567db8ced1be6de7ddf9ca1287c63b"} Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.721522 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.751485 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" podStartSLOduration=3.751465365 podStartE2EDuration="3.751465365s" podCreationTimestamp="2026-01-09 11:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:09:42.742354563 +0000 UTC m=+1428.192259374" watchObservedRunningTime="2026-01-09 11:09:42.751465365 +0000 UTC m=+1428.201370176" Jan 09 11:09:42 crc kubenswrapper[4727]: I0109 11:09:42.873101 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" path="/var/lib/kubelet/pods/0aa41a67-4a03-4479-8296-e3e0b3242cc6/volumes" Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.367594 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-j4b5d" Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.460170 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.460661 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" containerID="cri-o://390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9" gracePeriod=10 Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.822573 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerID="390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9" exitCode=0 Jan 09 11:09:50 crc kubenswrapper[4727]: I0109 11:09:50.822623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerDied","Data":"390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9"} Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.036683 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170256 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170390 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170491 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170552 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170804 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.170833 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") pod \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\" (UID: \"c1b5611d-b2c2-4a4a-897c-7f37995529cd\") " Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.185302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9" (OuterVolumeSpecName: "kube-api-access-dgcx9") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "kube-api-access-dgcx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.224286 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.226988 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.237409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.238274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config" (OuterVolumeSpecName: "config") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.240797 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.248989 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1b5611d-b2c2-4a4a-897c-7f37995529cd" (UID: "c1b5611d-b2c2-4a4a-897c-7f37995529cd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274556 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274601 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274614 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-config\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274626 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274637 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274649 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgcx9\" (UniqueName: \"kubernetes.io/projected/c1b5611d-b2c2-4a4a-897c-7f37995529cd-kube-api-access-dgcx9\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.274662 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1b5611d-b2c2-4a4a-897c-7f37995529cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.838268 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" event={"ID":"c1b5611d-b2c2-4a4a-897c-7f37995529cd","Type":"ContainerDied","Data":"b23db93dd41a6a3be55664c0b3ea8515aa8bb592b86dbca11e469e265d68b4ed"} Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.838355 4727 scope.go:117] "RemoveContainer" containerID="390ae1f0410cb9c95c184fe7f6eab98ead48a1b54abc58256bebd130ea5ecac9" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.838378 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4srmt" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.874862 4727 scope.go:117] "RemoveContainer" containerID="2132af742e53d9d08083cc335b4222b41afd7fcbcecb2bbd86a5624917def2a7" Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.885229 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:51 crc kubenswrapper[4727]: I0109 11:09:51.901118 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4srmt"] Jan 09 11:09:52 crc kubenswrapper[4727]: I0109 11:09:52.884811 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" path="/var/lib/kubelet/pods/c1b5611d-b2c2-4a4a-897c-7f37995529cd/volumes" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.566102 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv"] Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 11:10:03.567880 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.567907 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 
11:10:03.567939 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.567951 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 11:10:03.567982 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.567995 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="init" Jan 09 11:10:03 crc kubenswrapper[4727]: E0109 11:10:03.568035 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.568047 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.568384 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b5611d-b2c2-4a4a-897c-7f37995529cd" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.568405 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa41a67-4a03-4479-8296-e3e0b3242cc6" containerName="dnsmasq-dns" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.569556 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.574947 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.575768 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.576302 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.576958 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.581273 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv"] Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.689627 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.689872 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: 
I0109 11:10:03.689917 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.690310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792475 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.792688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.803808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.805437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.805453 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.821250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.901862 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.993990 4727 generic.go:334] "Generic (PLEG): container finished" podID="a49793da-9c08-47ea-892e-fe9e5b16d309" containerID="1c2f8ec07af4960828b3ab65dbd5b3a0ee3b340d21805a2f399fb9e5c66ecda7" exitCode=0 Jan 09 11:10:03 crc kubenswrapper[4727]: I0109 11:10:03.994084 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerDied","Data":"1c2f8ec07af4960828b3ab65dbd5b3a0ee3b340d21805a2f399fb9e5c66ecda7"} Jan 09 11:10:04 crc kubenswrapper[4727]: I0109 11:10:04.005695 4727 generic.go:334] "Generic (PLEG): container finished" podID="bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9" containerID="6e03fb8f18f09c152e786359641442571a6301ed4efc4901838ff5afd287285b" exitCode=0 Jan 09 11:10:04 crc kubenswrapper[4727]: I0109 11:10:04.005742 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerDied","Data":"6e03fb8f18f09c152e786359641442571a6301ed4efc4901838ff5afd287285b"} Jan 09 11:10:04 crc kubenswrapper[4727]: I0109 11:10:04.514624 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv"] Jan 09 11:10:04 crc kubenswrapper[4727]: W0109 11:10:04.527137 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9bcc7e6_29a0_4902_a4be_2ea8e0a1f1a1.slice/crio-6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7 WatchSource:0}: Error finding container 6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7: Status 404 returned error can't find the container with id 6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7 Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.018801 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a49793da-9c08-47ea-892e-fe9e5b16d309","Type":"ContainerStarted","Data":"606cf56153fe3380f0b1856793e7fdcc53f5f1215d67f935b5ae6b7ee10f0076"} Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.019753 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.021977 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerStarted","Data":"6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7"} Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.024175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9","Type":"ContainerStarted","Data":"00dae8f5c467ff57ac54c923d0e2b2416daf17f9c3978b1a4385201660a138b9"} Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.035952 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.074177 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.074156299 podStartE2EDuration="37.074156299s" podCreationTimestamp="2026-01-09 11:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:10:05.067869699 +0000 UTC m=+1450.517774490" watchObservedRunningTime="2026-01-09 11:10:05.074156299 +0000 UTC m=+1450.524061080" Jan 09 11:10:05 crc kubenswrapper[4727]: I0109 11:10:05.098688 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.098667275 podStartE2EDuration="37.098667275s" podCreationTimestamp="2026-01-09 11:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:10:05.095031741 +0000 UTC m=+1450.544936532" watchObservedRunningTime="2026-01-09 11:10:05.098667275 +0000 UTC m=+1450.548572056" Jan 09 11:10:16 crc kubenswrapper[4727]: I0109 11:10:16.184328 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerStarted","Data":"1f928fbedfb7bc8b275b06147ed533d3c4294ae75fede066f6997462f74c7c3d"} Jan 09 11:10:16 crc kubenswrapper[4727]: I0109 11:10:16.208249 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" podStartSLOduration=1.975981945 podStartE2EDuration="13.208225745s" podCreationTimestamp="2026-01-09 11:10:03 +0000 UTC" firstStartedPulling="2026-01-09 11:10:04.530118395 +0000 UTC m=+1449.980023166" lastFinishedPulling="2026-01-09 11:10:15.762362195 +0000 UTC m=+1461.212266966" observedRunningTime="2026-01-09 11:10:16.203154605 +0000 UTC m=+1461.653059406" 
watchObservedRunningTime="2026-01-09 11:10:16.208225745 +0000 UTC m=+1461.658130536" Jan 09 11:10:18 crc kubenswrapper[4727]: I0109 11:10:18.983767 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 11:10:19 crc kubenswrapper[4727]: I0109 11:10:19.297870 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 11:10:28 crc kubenswrapper[4727]: I0109 11:10:28.309191 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerID="1f928fbedfb7bc8b275b06147ed533d3c4294ae75fede066f6997462f74c7c3d" exitCode=0 Jan 09 11:10:28 crc kubenswrapper[4727]: I0109 11:10:28.309314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerDied","Data":"1f928fbedfb7bc8b275b06147ed533d3c4294ae75fede066f6997462f74c7c3d"} Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.761454 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.827493 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.827979 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.828154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.828202 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") pod \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\" (UID: \"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1\") " Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.834952 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v" (OuterVolumeSpecName: "kube-api-access-kxk9v") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "kube-api-access-kxk9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.835446 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.860568 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.861846 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory" (OuterVolumeSpecName: "inventory") pod "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" (UID: "d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931080 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931125 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931137 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:29 crc kubenswrapper[4727]: I0109 11:10:29.931148 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxk9v\" (UniqueName: \"kubernetes.io/projected/d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1-kube-api-access-kxk9v\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.332527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" event={"ID":"d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1","Type":"ContainerDied","Data":"6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7"} Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.332582 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6ab3373292deecdc88151bf1982f5bc36b1883696147d44c215651aa1241a7" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.332644 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.430379 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm"] Jan 09 11:10:30 crc kubenswrapper[4727]: E0109 11:10:30.431038 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.431287 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.431608 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.432962 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.436321 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.436475 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.437719 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.437924 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.441040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.441114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.441154 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.444683 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm"] Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.543565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.543821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.543887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.548976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.553033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.563156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4zggm\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:30 crc kubenswrapper[4727]: I0109 11:10:30.770139 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:31 crc kubenswrapper[4727]: I0109 11:10:31.333359 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm"] Jan 09 11:10:32 crc kubenswrapper[4727]: I0109 11:10:32.359984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerStarted","Data":"83fb2bc948a64679249e916f95d25d9ad6f941205a63e1952138f0b5c8da938a"} Jan 09 11:10:32 crc kubenswrapper[4727]: I0109 11:10:32.360560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerStarted","Data":"d99a589f1d2bfa22b2f784d0d7a073457bbd7ffa76fa9e62ec52a87a536d9911"} Jan 09 11:10:32 crc kubenswrapper[4727]: I0109 11:10:32.381314 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" podStartSLOduration=1.845153318 podStartE2EDuration="2.381287691s" podCreationTimestamp="2026-01-09 11:10:30 +0000 UTC" firstStartedPulling="2026-01-09 11:10:31.357086072 +0000 UTC m=+1476.806990853" lastFinishedPulling="2026-01-09 11:10:31.893220395 +0000 UTC m=+1477.343125226" observedRunningTime="2026-01-09 11:10:32.378293424 +0000 UTC m=+1477.828198225" watchObservedRunningTime="2026-01-09 11:10:32.381287691 +0000 UTC m=+1477.831192492" Jan 09 11:10:35 crc kubenswrapper[4727]: I0109 11:10:35.396146 4727 generic.go:334] "Generic (PLEG): container finished" podID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerID="83fb2bc948a64679249e916f95d25d9ad6f941205a63e1952138f0b5c8da938a" exitCode=0 Jan 09 11:10:35 crc kubenswrapper[4727]: I0109 11:10:35.396288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerDied","Data":"83fb2bc948a64679249e916f95d25d9ad6f941205a63e1952138f0b5c8da938a"} Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.822453 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.908758 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") pod \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.908877 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") pod \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.908926 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") pod \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\" (UID: \"ce764242-0f23-4580-87ee-9f0f2f81fb0e\") " Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.916937 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt" (OuterVolumeSpecName: "kube-api-access-m9smt") pod "ce764242-0f23-4580-87ee-9f0f2f81fb0e" (UID: "ce764242-0f23-4580-87ee-9f0f2f81fb0e"). InnerVolumeSpecName "kube-api-access-m9smt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.940404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce764242-0f23-4580-87ee-9f0f2f81fb0e" (UID: "ce764242-0f23-4580-87ee-9f0f2f81fb0e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.940808 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory" (OuterVolumeSpecName: "inventory") pod "ce764242-0f23-4580-87ee-9f0f2f81fb0e" (UID: "ce764242-0f23-4580-87ee-9f0f2f81fb0e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:10:36 crc kubenswrapper[4727]: I0109 11:10:36.975962 4727 scope.go:117] "RemoveContainer" containerID="5456968a5bb394405d1937902e90ca9c687f3ec8600257fc65b14f86f0be1050" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.012009 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.012048 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9smt\" (UniqueName: \"kubernetes.io/projected/ce764242-0f23-4580-87ee-9f0f2f81fb0e-kube-api-access-m9smt\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.012063 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce764242-0f23-4580-87ee-9f0f2f81fb0e-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:10:37 crc kubenswrapper[4727]: 
I0109 11:10:37.028686 4727 scope.go:117] "RemoveContainer" containerID="508aae6e73476bd7d8554f7bf79128adfc2937e36453761ce5d6c273144e8c65" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.085431 4727 scope.go:117] "RemoveContainer" containerID="aaf2a92e3a5d89ba3eacf1abbc6c991d4370be4c694455772f2202d7a23e7cb9" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.111670 4727 scope.go:117] "RemoveContainer" containerID="9684f510a2931cd79a1a34ffd5acdf9db329d2f059862bc3a498860e5df62633" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.137899 4727 scope.go:117] "RemoveContainer" containerID="978d1d0639986a01c899167d3627f579f640a9ec16babb304f6a9c41d9381181" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.425093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" event={"ID":"ce764242-0f23-4580-87ee-9f0f2f81fb0e","Type":"ContainerDied","Data":"d99a589f1d2bfa22b2f784d0d7a073457bbd7ffa76fa9e62ec52a87a536d9911"} Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.425708 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99a589f1d2bfa22b2f784d0d7a073457bbd7ffa76fa9e62ec52a87a536d9911" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.425176 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4zggm" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.496744 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc"] Jan 09 11:10:37 crc kubenswrapper[4727]: E0109 11:10:37.497147 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.497167 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.497388 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce764242-0f23-4580-87ee-9f0f2f81fb0e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.498083 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.500745 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.500808 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.501171 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.503927 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.516655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc"] Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.625379 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.625757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.626752 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.626905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.729846 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.730201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.730343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.730495 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.736181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.736181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.737285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.761050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:37 crc kubenswrapper[4727]: I0109 11:10:37.873251 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:10:38 crc kubenswrapper[4727]: I0109 11:10:38.442979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc"] Jan 09 11:10:39 crc kubenswrapper[4727]: I0109 11:10:39.446872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerStarted","Data":"422ebdc6dd6112f3e20a548d3f702db80a12d85c42b72dbbf30001fd9874275e"} Jan 09 11:10:39 crc kubenswrapper[4727]: I0109 11:10:39.447699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerStarted","Data":"e186a8e419b648f807121156f384a6dd0b31f821e18f771ed7229a01613aa47f"} Jan 09 11:10:39 crc kubenswrapper[4727]: I0109 11:10:39.471643 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" podStartSLOduration=2.017110885 podStartE2EDuration="2.471616745s" podCreationTimestamp="2026-01-09 11:10:37 +0000 UTC" firstStartedPulling="2026-01-09 11:10:38.451580442 +0000 UTC m=+1483.901485233" 
lastFinishedPulling="2026-01-09 11:10:38.906086302 +0000 UTC m=+1484.355991093" observedRunningTime="2026-01-09 11:10:39.462723917 +0000 UTC m=+1484.912628708" watchObservedRunningTime="2026-01-09 11:10:39.471616745 +0000 UTC m=+1484.921521526" Jan 09 11:11:09 crc kubenswrapper[4727]: I0109 11:11:09.405737 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:11:09 crc kubenswrapper[4727]: I0109 11:11:09.406729 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:11:37 crc kubenswrapper[4727]: I0109 11:11:37.318264 4727 scope.go:117] "RemoveContainer" containerID="afad1c35a086c45b0d71f6a0dcf1c838896cbf238adf7d23705b1d81b1e0c5fd" Jan 09 11:11:39 crc kubenswrapper[4727]: I0109 11:11:39.405385 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:11:39 crc kubenswrapper[4727]: I0109 11:11:39.405968 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.268235 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.271693 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.278011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.315549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.333956 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.335034 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.437247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod 
\"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.437385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.437461 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.438229 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.438503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"certified-operators-mnlpw\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.463033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod \"certified-operators-mnlpw\" (UID: 
\"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:46 crc kubenswrapper[4727]: I0109 11:11:46.631608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:47 crc kubenswrapper[4727]: I0109 11:11:47.208427 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:48 crc kubenswrapper[4727]: I0109 11:11:48.206456 4727 generic.go:334] "Generic (PLEG): container finished" podID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" exitCode=0 Jan 09 11:11:48 crc kubenswrapper[4727]: I0109 11:11:48.206570 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b"} Jan 09 11:11:48 crc kubenswrapper[4727]: I0109 11:11:48.207739 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerStarted","Data":"c6761630a27b118fa7a1b8ffc3af0856cbb08e875418e32660c39fd050836633"} Jan 09 11:11:49 crc kubenswrapper[4727]: I0109 11:11:49.220272 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerStarted","Data":"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d"} Jan 09 11:11:50 crc kubenswrapper[4727]: I0109 11:11:50.235412 4727 generic.go:334] "Generic (PLEG): container finished" podID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" exitCode=0 Jan 09 11:11:50 crc kubenswrapper[4727]: I0109 
11:11:50.235553 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d"} Jan 09 11:11:51 crc kubenswrapper[4727]: I0109 11:11:51.248690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerStarted","Data":"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2"} Jan 09 11:11:51 crc kubenswrapper[4727]: I0109 11:11:51.272837 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mnlpw" podStartSLOduration=2.810798393 podStartE2EDuration="5.272794295s" podCreationTimestamp="2026-01-09 11:11:46 +0000 UTC" firstStartedPulling="2026-01-09 11:11:48.209265423 +0000 UTC m=+1553.659170204" lastFinishedPulling="2026-01-09 11:11:50.671261325 +0000 UTC m=+1556.121166106" observedRunningTime="2026-01-09 11:11:51.269689796 +0000 UTC m=+1556.719594587" watchObservedRunningTime="2026-01-09 11:11:51.272794295 +0000 UTC m=+1556.722699076" Jan 09 11:11:56 crc kubenswrapper[4727]: I0109 11:11:56.632195 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:56 crc kubenswrapper[4727]: I0109 11:11:56.633158 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:56 crc kubenswrapper[4727]: I0109 11:11:56.684866 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:57 crc kubenswrapper[4727]: I0109 11:11:57.367599 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mnlpw" Jan 
09 11:11:57 crc kubenswrapper[4727]: I0109 11:11:57.421687 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.335830 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mnlpw" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" containerID="cri-o://5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" gracePeriod=2 Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.839209 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.970053 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") pod \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.970679 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") pod \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.970900 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities" (OuterVolumeSpecName: "utilities") pod "1235df16-02a9-4ac7-b8e2-d3411d65c5cd" (UID: "1235df16-02a9-4ac7-b8e2-d3411d65c5cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.971050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") pod \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\" (UID: \"1235df16-02a9-4ac7-b8e2-d3411d65c5cd\") " Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.972204 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:11:59 crc kubenswrapper[4727]: I0109 11:11:59.977178 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg" (OuterVolumeSpecName: "kube-api-access-pf6tg") pod "1235df16-02a9-4ac7-b8e2-d3411d65c5cd" (UID: "1235df16-02a9-4ac7-b8e2-d3411d65c5cd"). InnerVolumeSpecName "kube-api-access-pf6tg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.074724 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf6tg\" (UniqueName: \"kubernetes.io/projected/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-kube-api-access-pf6tg\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351473 4727 generic.go:334] "Generic (PLEG): container finished" podID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" exitCode=0 Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351542 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2"} Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlpw" event={"ID":"1235df16-02a9-4ac7-b8e2-d3411d65c5cd","Type":"ContainerDied","Data":"c6761630a27b118fa7a1b8ffc3af0856cbb08e875418e32660c39fd050836633"} Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351630 4727 scope.go:117] "RemoveContainer" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.351563 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mnlpw" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.375525 4727 scope.go:117] "RemoveContainer" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.397368 4727 scope.go:117] "RemoveContainer" containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.423464 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1235df16-02a9-4ac7-b8e2-d3411d65c5cd" (UID: "1235df16-02a9-4ac7-b8e2-d3411d65c5cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.440925 4727 scope.go:117] "RemoveContainer" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" Jan 09 11:12:00 crc kubenswrapper[4727]: E0109 11:12:00.441404 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2\": container with ID starting with 5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2 not found: ID does not exist" containerID="5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441440 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2"} err="failed to get container status \"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2\": rpc error: code = NotFound desc = could not find container \"5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2\": 
container with ID starting with 5c3104386cc86e8e7bf7982e452335828a3209c375d9b5a0687b363f3187e3d2 not found: ID does not exist" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441463 4727 scope.go:117] "RemoveContainer" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" Jan 09 11:12:00 crc kubenswrapper[4727]: E0109 11:12:00.441962 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d\": container with ID starting with a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d not found: ID does not exist" containerID="a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441986 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d"} err="failed to get container status \"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d\": rpc error: code = NotFound desc = could not find container \"a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d\": container with ID starting with a908c9015e9fb050c5d54967d9d492fa33f6fed5cf42b491e63e2af212d90d4d not found: ID does not exist" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.441998 4727 scope.go:117] "RemoveContainer" containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" Jan 09 11:12:00 crc kubenswrapper[4727]: E0109 11:12:00.442433 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b\": container with ID starting with 7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b not found: ID does not exist" 
containerID="7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.442460 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b"} err="failed to get container status \"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b\": rpc error: code = NotFound desc = could not find container \"7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b\": container with ID starting with 7951150aa5128569a9e412131df9ecf3e71fcb9b8ebd4bc624c9ecb03f84777b not found: ID does not exist" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.484117 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1235df16-02a9-4ac7-b8e2-d3411d65c5cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.699108 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.708725 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mnlpw"] Jan 09 11:12:00 crc kubenswrapper[4727]: I0109 11:12:00.875988 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" path="/var/lib/kubelet/pods/1235df16-02a9-4ac7-b8e2-d3411d65c5cd/volumes" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.755658 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:06 crc kubenswrapper[4727]: E0109 11:12:06.757356 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-utilities" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757379 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-utilities" Jan 09 11:12:06 crc kubenswrapper[4727]: E0109 11:12:06.757400 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757429 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" Jan 09 11:12:06 crc kubenswrapper[4727]: E0109 11:12:06.757459 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-content" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757465 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="extract-content" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.757920 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1235df16-02a9-4ac7-b8e2-d3411d65c5cd" containerName="registry-server" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.759880 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.767798 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.826691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-catalog-content\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.826927 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swsrp\" (UniqueName: \"kubernetes.io/projected/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-kube-api-access-swsrp\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.827410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-utilities\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-catalog-content\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929445 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-swsrp\" (UniqueName: \"kubernetes.io/projected/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-kube-api-access-swsrp\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929572 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-utilities\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-catalog-content\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.929930 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-utilities\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:06 crc kubenswrapper[4727]: I0109 11:12:06.953836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swsrp\" (UniqueName: \"kubernetes.io/projected/5045256f-167a-4bdd-b1dc-3b052bbdfeb6-kube-api-access-swsrp\") pod \"community-operators-fbk2g\" (UID: \"5045256f-167a-4bdd-b1dc-3b052bbdfeb6\") " pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:07 crc kubenswrapper[4727]: I0109 11:12:07.088462 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:07 crc kubenswrapper[4727]: I0109 11:12:07.652167 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:08 crc kubenswrapper[4727]: I0109 11:12:08.430571 4727 generic.go:334] "Generic (PLEG): container finished" podID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" containerID="2caca0541fe47929e16217e797d21ae7809a50fd1a6f0f5f9a4e867fd53bbaad" exitCode=0 Jan 09 11:12:08 crc kubenswrapper[4727]: I0109 11:12:08.430677 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerDied","Data":"2caca0541fe47929e16217e797d21ae7809a50fd1a6f0f5f9a4e867fd53bbaad"} Jan 09 11:12:08 crc kubenswrapper[4727]: I0109 11:12:08.430884 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerStarted","Data":"4a38c7f026728d8816fe27304b7755fb62283693bf4673f19989f176ce1efc58"} Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.404499 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.404572 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.404613 4727 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.405281 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:12:09 crc kubenswrapper[4727]: I0109 11:12:09.405327 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" gracePeriod=600 Jan 09 11:12:10 crc kubenswrapper[4727]: E0109 11:12:10.061962 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.473126 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" exitCode=0 Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.473188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019"} Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.473248 4727 scope.go:117] "RemoveContainer" containerID="02ac79a04d63ff7c30153421b85a51d152efcc3a8aa44f97a3a362a2e8bde81a" Jan 09 11:12:10 crc kubenswrapper[4727]: I0109 11:12:10.474113 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:10 crc kubenswrapper[4727]: E0109 11:12:10.474522 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:21 crc kubenswrapper[4727]: E0109 11:12:21.005066 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 09 11:12:21 crc kubenswrapper[4727]: E0109 11:12:21.005936 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swsrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fbk2g_openshift-marketplace(5045256f-167a-4bdd-b1dc-3b052bbdfeb6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 09 11:12:21 crc kubenswrapper[4727]: E0109 11:12:21.007146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fbk2g" podUID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" Jan 09 11:12:21 crc 
kubenswrapper[4727]: E0109 11:12:21.606466 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fbk2g" podUID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" Jan 09 11:12:23 crc kubenswrapper[4727]: I0109 11:12:23.860079 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:23 crc kubenswrapper[4727]: E0109 11:12:23.860874 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:34 crc kubenswrapper[4727]: I0109 11:12:34.885256 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:34 crc kubenswrapper[4727]: E0109 11:12:34.886613 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:35 crc kubenswrapper[4727]: I0109 11:12:35.755625 4727 generic.go:334] "Generic (PLEG): container finished" podID="5045256f-167a-4bdd-b1dc-3b052bbdfeb6" containerID="2b73d9986017db6134722c854a83c36d6db3cb027749a3a9499c889eb762b36a" exitCode=0 Jan 09 11:12:35 crc 
kubenswrapper[4727]: I0109 11:12:35.755681 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerDied","Data":"2b73d9986017db6134722c854a83c36d6db3cb027749a3a9499c889eb762b36a"} Jan 09 11:12:37 crc kubenswrapper[4727]: I0109 11:12:37.778005 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbk2g" event={"ID":"5045256f-167a-4bdd-b1dc-3b052bbdfeb6","Type":"ContainerStarted","Data":"e16ebeb855e655dfd97d784948801190654cdf0593bf3358f79163637b067f1d"} Jan 09 11:12:37 crc kubenswrapper[4727]: I0109 11:12:37.813693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fbk2g" podStartSLOduration=3.791063741 podStartE2EDuration="31.813655846s" podCreationTimestamp="2026-01-09 11:12:06 +0000 UTC" firstStartedPulling="2026-01-09 11:12:08.433070048 +0000 UTC m=+1573.882974829" lastFinishedPulling="2026-01-09 11:12:36.455662163 +0000 UTC m=+1601.905566934" observedRunningTime="2026-01-09 11:12:37.802256562 +0000 UTC m=+1603.252161343" watchObservedRunningTime="2026-01-09 11:12:37.813655846 +0000 UTC m=+1603.263560627" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.088743 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.089359 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.162410 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:47 crc kubenswrapper[4727]: I0109 11:12:47.977615 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-fbk2g" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.063739 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbk2g"] Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.139252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.139635 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9rsdw" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" containerID="cri-o://1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" gracePeriod=2 Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.619289 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.792246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") pod \"9f453764-5e7d-441d-90d0-c96ae96597ef\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.792386 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") pod \"9f453764-5e7d-441d-90d0-c96ae96597ef\" (UID: \"9f453764-5e7d-441d-90d0-c96ae96597ef\") " Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.792466 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") pod \"9f453764-5e7d-441d-90d0-c96ae96597ef\" (UID: 
\"9f453764-5e7d-441d-90d0-c96ae96597ef\") " Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.793095 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities" (OuterVolumeSpecName: "utilities") pod "9f453764-5e7d-441d-90d0-c96ae96597ef" (UID: "9f453764-5e7d-441d-90d0-c96ae96597ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.800610 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st" (OuterVolumeSpecName: "kube-api-access-z79st") pod "9f453764-5e7d-441d-90d0-c96ae96597ef" (UID: "9f453764-5e7d-441d-90d0-c96ae96597ef"). InnerVolumeSpecName "kube-api-access-z79st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.840715 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f453764-5e7d-441d-90d0-c96ae96597ef" (UID: "9f453764-5e7d-441d-90d0-c96ae96597ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.861977 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:12:48 crc kubenswrapper[4727]: E0109 11:12:48.862325 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.894864 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z79st\" (UniqueName: \"kubernetes.io/projected/9f453764-5e7d-441d-90d0-c96ae96597ef-kube-api-access-z79st\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.894889 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.894899 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f453764-5e7d-441d-90d0-c96ae96597ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939363 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" exitCode=0 Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" 
event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49"} Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939542 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rsdw" event={"ID":"9f453764-5e7d-441d-90d0-c96ae96597ef","Type":"ContainerDied","Data":"357891722b37e84c5d6696b58f957606ce91311ffc64133377aa8cf62644c51c"} Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939569 4727 scope.go:117] "RemoveContainer" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.939877 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rsdw" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.965497 4727 scope.go:117] "RemoveContainer" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.975622 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 11:12:48 crc kubenswrapper[4727]: I0109 11:12:48.985817 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9rsdw"] Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.020712 4727 scope.go:117] "RemoveContainer" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.053653 4727 scope.go:117] "RemoveContainer" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" Jan 09 11:12:49 crc kubenswrapper[4727]: E0109 11:12:49.054960 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49\": container 
with ID starting with 1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49 not found: ID does not exist" containerID="1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055031 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49"} err="failed to get container status \"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49\": rpc error: code = NotFound desc = could not find container \"1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49\": container with ID starting with 1e6d063adc7cb5f66dd7be4bbcbf9da35a85065e06ff77e3afc8593f73b17f49 not found: ID does not exist" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055079 4727 scope.go:117] "RemoveContainer" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" Jan 09 11:12:49 crc kubenswrapper[4727]: E0109 11:12:49.055848 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f\": container with ID starting with c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f not found: ID does not exist" containerID="c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055883 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f"} err="failed to get container status \"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f\": rpc error: code = NotFound desc = could not find container \"c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f\": container with ID starting with c9fedf5a3aa32ca0565090cc373d92bd9d6b96d5adab76dfd59e7f760440289f not 
found: ID does not exist" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.055903 4727 scope.go:117] "RemoveContainer" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" Jan 09 11:12:49 crc kubenswrapper[4727]: E0109 11:12:49.056286 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc\": container with ID starting with 45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc not found: ID does not exist" containerID="45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc" Jan 09 11:12:49 crc kubenswrapper[4727]: I0109 11:12:49.056331 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc"} err="failed to get container status \"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc\": rpc error: code = NotFound desc = could not find container \"45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc\": container with ID starting with 45cb3d6f2005794d1ae490ccd4e058d1d4d118d2879f13b740ca83fe6efc21cc not found: ID does not exist" Jan 09 11:12:50 crc kubenswrapper[4727]: I0109 11:12:50.874368 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" path="/var/lib/kubelet/pods/9f453764-5e7d-441d-90d0-c96ae96597ef/volumes" Jan 09 11:13:02 crc kubenswrapper[4727]: I0109 11:13:02.860188 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:02 crc kubenswrapper[4727]: E0109 11:13:02.860989 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:13 crc kubenswrapper[4727]: I0109 11:13:13.860587 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:13 crc kubenswrapper[4727]: E0109 11:13:13.861585 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:24 crc kubenswrapper[4727]: I0109 11:13:24.871159 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:24 crc kubenswrapper[4727]: E0109 11:13:24.872651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:37 crc kubenswrapper[4727]: I0109 11:13:37.860595 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:37 crc kubenswrapper[4727]: E0109 11:13:37.863253 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:13:50 crc kubenswrapper[4727]: I0109 11:13:50.861031 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:13:50 crc kubenswrapper[4727]: E0109 11:13:50.862207 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:01 crc kubenswrapper[4727]: I0109 11:14:01.732938 4727 generic.go:334] "Generic (PLEG): container finished" podID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerID="422ebdc6dd6112f3e20a548d3f702db80a12d85c42b72dbbf30001fd9874275e" exitCode=0 Jan 09 11:14:01 crc kubenswrapper[4727]: I0109 11:14:01.733469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerDied","Data":"422ebdc6dd6112f3e20a548d3f702db80a12d85c42b72dbbf30001fd9874275e"} Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.188602 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.280723 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.280866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.280937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.281177 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") pod \"23e25abc-b16a-4273-846e-7fab7ef1a095\" (UID: \"23e25abc-b16a-4273-846e-7fab7ef1a095\") " Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.294807 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.294831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm" (OuterVolumeSpecName: "kube-api-access-9bvmm") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "kube-api-access-9bvmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.317275 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.320058 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory" (OuterVolumeSpecName: "inventory") pod "23e25abc-b16a-4273-846e-7fab7ef1a095" (UID: "23e25abc-b16a-4273-846e-7fab7ef1a095"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384569 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384613 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384626 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e25abc-b16a-4273-846e-7fab7ef1a095-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.384637 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bvmm\" (UniqueName: \"kubernetes.io/projected/23e25abc-b16a-4273-846e-7fab7ef1a095-kube-api-access-9bvmm\") on node \"crc\" DevicePath \"\"" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.764239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" event={"ID":"23e25abc-b16a-4273-846e-7fab7ef1a095","Type":"ContainerDied","Data":"e186a8e419b648f807121156f384a6dd0b31f821e18f771ed7229a01613aa47f"} Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.764304 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e186a8e419b648f807121156f384a6dd0b31f821e18f771ed7229a01613aa47f" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.764371 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.899819 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz"] Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900450 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900476 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900491 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-content" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900498 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-content" Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900530 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-utilities" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900540 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="extract-utilities" Jan 09 11:14:03 crc kubenswrapper[4727]: E0109 11:14:03.900563 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900569 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.900973 
4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e25abc-b16a-4273-846e-7fab7ef1a095" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.901008 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f453764-5e7d-441d-90d0-c96ae96597ef" containerName="registry-server" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.902006 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.905181 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.905317 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.905184 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.908645 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.920752 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz"] Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.996808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:03 crc 
kubenswrapper[4727]: I0109 11:14:03.997008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:03 crc kubenswrapper[4727]: I0109 11:14:03.997036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.048501 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.058370 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.069730 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6qxrb"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.080068 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7a4c-account-create-update-p6w9f"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.100293 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.100346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.100457 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.104284 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.105440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.123934 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.222620 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.809235 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz"] Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.816928 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.873406 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:04 crc kubenswrapper[4727]: E0109 11:14:04.874146 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.883726 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fe1de7-6846-464a-8c23-b5cbc944ffaf" path="/var/lib/kubelet/pods/b3fe1de7-6846-464a-8c23-b5cbc944ffaf/volumes" Jan 09 11:14:04 crc kubenswrapper[4727]: I0109 11:14:04.885108 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c54e2e39-4fb7-4ccb-98e4-437653bcc01c" 
path="/var/lib/kubelet/pods/c54e2e39-4fb7-4ccb-98e4-437653bcc01c/volumes" Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.033734 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.045312 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.055835 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j2gst"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.074320 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9ce5-account-create-update-cgwt7"] Jan 09 11:14:05 crc kubenswrapper[4727]: I0109 11:14:05.790427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerStarted","Data":"57af9f3728f5b4fee091f76e69c7f54b89f80090673fd53559e2fb8320ba3fe4"} Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.804134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerStarted","Data":"5d45bc6e13ecbeb42bb2358acab10d095b3fbfd498c6a9f5de9d288fc9598d06"} Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.841136 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" podStartSLOduration=2.905406861 podStartE2EDuration="3.841107549s" podCreationTimestamp="2026-01-09 11:14:03 +0000 UTC" firstStartedPulling="2026-01-09 11:14:04.816534152 +0000 UTC m=+1690.266438933" lastFinishedPulling="2026-01-09 11:14:05.75223483 +0000 UTC m=+1691.202139621" observedRunningTime="2026-01-09 11:14:06.825280651 +0000 UTC 
m=+1692.275185452" watchObservedRunningTime="2026-01-09 11:14:06.841107549 +0000 UTC m=+1692.291012350" Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.885445 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9" path="/var/lib/kubelet/pods/9fa40d1e-2cbe-4aeb-bb8d-edfa165a6cd9/volumes" Jan 09 11:14:06 crc kubenswrapper[4727]: I0109 11:14:06.886248 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5dba580-00b4-4bed-a734-78ac96b5cd4d" path="/var/lib/kubelet/pods/b5dba580-00b4-4bed-a734-78ac96b5cd4d/volumes" Jan 09 11:14:10 crc kubenswrapper[4727]: I0109 11:14:10.057416 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:14:10 crc kubenswrapper[4727]: I0109 11:14:10.077163 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-65a5-account-create-update-swhhc"] Jan 09 11:14:10 crc kubenswrapper[4727]: I0109 11:14:10.873944 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5471acc-7f1a-4b92-babf-8dea0d8c5a5b" path="/var/lib/kubelet/pods/b5471acc-7f1a-4b92-babf-8dea0d8c5a5b/volumes" Jan 09 11:14:11 crc kubenswrapper[4727]: I0109 11:14:11.041344 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:14:11 crc kubenswrapper[4727]: I0109 11:14:11.051644 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m6676"] Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.043551 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.053465 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j9h4f"] Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.872250 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="14fbdc64-2108-41db-88bd-d978e9ce6550" path="/var/lib/kubelet/pods/14fbdc64-2108-41db-88bd-d978e9ce6550/volumes" Jan 09 11:14:12 crc kubenswrapper[4727]: I0109 11:14:12.873111 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8ff110-0416-4e41-b9cf-a9f622e9a4c8" path="/var/lib/kubelet/pods/5e8ff110-0416-4e41-b9cf-a9f622e9a4c8/volumes" Jan 09 11:14:15 crc kubenswrapper[4727]: I0109 11:14:15.862599 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:15 crc kubenswrapper[4727]: E0109 11:14:15.863481 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.056879 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.067565 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.077642 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.087762 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rllkj"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.096984 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hwqw8"] Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.105177 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-29t76"] Jan 
09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.861186 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:30 crc kubenswrapper[4727]: E0109 11:14:30.861584 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.873956 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108eb21f-902c-4942-8be4-9a3b11146c25" path="/var/lib/kubelet/pods/108eb21f-902c-4942-8be4-9a3b11146c25/volumes" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.874858 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46480603-3f1d-4589-ba8e-9026edee07c7" path="/var/lib/kubelet/pods/46480603-3f1d-4589-ba8e-9026edee07c7/volumes" Jan 09 11:14:30 crc kubenswrapper[4727]: I0109 11:14:30.875485 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14bbd99-7e5d-48ab-8573-ad9c5eea68fb" path="/var/lib/kubelet/pods/c14bbd99-7e5d-48ab-8573-ad9c5eea68fb/volumes" Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.053571 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.066098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.075802 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-43da-account-create-update-4whcc"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 
11:14:35.084203 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d226-account-create-update-7gc64"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.092243 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:14:35 crc kubenswrapper[4727]: I0109 11:14:35.100481 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1dcf-account-create-update-pmcnw"] Jan 09 11:14:36 crc kubenswrapper[4727]: I0109 11:14:36.872973 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d06cd8-5172-4755-93f0-6c6aa036bed8" path="/var/lib/kubelet/pods/22d06cd8-5172-4755-93f0-6c6aa036bed8/volumes" Jan 09 11:14:36 crc kubenswrapper[4727]: I0109 11:14:36.874073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad382ed-924d-4c03-88b2-63d89690a56a" path="/var/lib/kubelet/pods/4ad382ed-924d-4c03-88b2-63d89690a56a/volumes" Jan 09 11:14:36 crc kubenswrapper[4727]: I0109 11:14:36.874708 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b70879-a5de-4ea1-9db1-82d9f0416a71" path="/var/lib/kubelet/pods/c1b70879-a5de-4ea1-9db1-82d9f0416a71/volumes" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.489949 4727 scope.go:117] "RemoveContainer" containerID="576ae13b814294e919858fca6b483585aa864e6c9996edab682aeeb31d66daf0" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.515633 4727 scope.go:117] "RemoveContainer" containerID="72f21ea3746f823a01ff3632cf334c040301673bdb3b5a878b6260e8b9af266c" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.550917 4727 scope.go:117] "RemoveContainer" containerID="ab5fe13841fb6a09172cc36dfa78a6ba9ea1b1ae3881702694372f050a5fde30" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.590420 4727 scope.go:117] "RemoveContainer" containerID="e1d67c9e3e1b7cbf71977915270fabeef45479ab8480cabc21f2f8f472aa7e01" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 
11:14:37.615297 4727 scope.go:117] "RemoveContainer" containerID="5afe7ea6f705be5c16f92e80a56b8b0f094dbbcf85b0af4db628a7dbbeab8019" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.639198 4727 scope.go:117] "RemoveContainer" containerID="fd86d26604fa990daf0250e4ca92d0297bfeb8649e742dfecf596e5d32e6713b" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.686621 4727 scope.go:117] "RemoveContainer" containerID="d929058945f4f976a10c0ad4e38bc8bac084a324f08128e5ad431ba6df04130e" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.732545 4727 scope.go:117] "RemoveContainer" containerID="dfac37bf01ecc72f7cbe4e36980b1d63912e58d44854fd22b7eb51acb67a3482" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.792348 4727 scope.go:117] "RemoveContainer" containerID="4b638c817b29ed248546a516c2f4dc54b3f00561caeb3b5322db912d38b8ae1d" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.837975 4727 scope.go:117] "RemoveContainer" containerID="a4b50d5c7e5a2ac088b99192a0ef8ae1f0162a1bb12adc59cf61c748194423e5" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.872470 4727 scope.go:117] "RemoveContainer" containerID="d6959b7da986b00bc70e51fdf39956f346afe58b899a2e451f5f896031407d83" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.895750 4727 scope.go:117] "RemoveContainer" containerID="8cbbc5a0e078338f400d60c2f06eefdbda48f9727dc50c6209388201bc809674" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.923399 4727 scope.go:117] "RemoveContainer" containerID="958624eb08021ff7266f8cba72d352da3762bd6dc61b65c471a77ceb84f652f5" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.950428 4727 scope.go:117] "RemoveContainer" containerID="9351498b0abda3f72f1c19e54b7af5df2296f0bc4d77538fe4e01b4ae9d47180" Jan 09 11:14:37 crc kubenswrapper[4727]: I0109 11:14:37.979881 4727 scope.go:117] "RemoveContainer" containerID="538236df2e722658ac6062177b9a40be31fb73d68537a811c36bed8ec6ebd0f2" Jan 09 11:14:38 crc kubenswrapper[4727]: I0109 11:14:38.007265 4727 
scope.go:117] "RemoveContainer" containerID="29e8e8db2a35769af205e4fe07dfcb0f161be2135de38c69be53aa1504c48cb3" Jan 09 11:14:38 crc kubenswrapper[4727]: I0109 11:14:38.034922 4727 scope.go:117] "RemoveContainer" containerID="00e330dc8e4d5563bc7056af16edc5bfdbab81ae265d410bf050c38028359c89" Jan 09 11:14:38 crc kubenswrapper[4727]: I0109 11:14:38.058087 4727 scope.go:117] "RemoveContainer" containerID="1263ecb7bda875303dddab37976768c97598ef07433b73e25914d8e050a30df9" Jan 09 11:14:43 crc kubenswrapper[4727]: I0109 11:14:43.035778 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:14:43 crc kubenswrapper[4727]: I0109 11:14:43.051430 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4xh9m"] Jan 09 11:14:43 crc kubenswrapper[4727]: I0109 11:14:43.861001 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:43 crc kubenswrapper[4727]: E0109 11:14:43.861319 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:44 crc kubenswrapper[4727]: I0109 11:14:44.870805 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64657563-7e2f-46ef-a906-37e42398662a" path="/var/lib/kubelet/pods/64657563-7e2f-46ef-a906-37e42398662a/volumes" Jan 09 11:14:55 crc kubenswrapper[4727]: I0109 11:14:55.860329 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:14:55 crc kubenswrapper[4727]: E0109 11:14:55.863107 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:14:57 crc kubenswrapper[4727]: I0109 11:14:57.042401 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:14:57 crc kubenswrapper[4727]: I0109 11:14:57.054922 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9gv8v"] Jan 09 11:14:58 crc kubenswrapper[4727]: I0109 11:14:58.873114 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5667805-aff5-4227-88df-2d2440259e9b" path="/var/lib/kubelet/pods/e5667805-aff5-4227-88df-2d2440259e9b/volumes" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.154702 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.156877 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.159757 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.161764 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.191016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.249012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.249071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.249174 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.351434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.351586 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.351613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.353090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.361876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.376883 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"collect-profiles-29465955-d2jgp\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.491624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:00 crc kubenswrapper[4727]: I0109 11:15:00.984086 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"] Jan 09 11:15:01 crc kubenswrapper[4727]: I0109 11:15:01.476016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerStarted","Data":"84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709"} Jan 09 11:15:01 crc kubenswrapper[4727]: I0109 11:15:01.476085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerStarted","Data":"a36b7da4874459996f33c478062bdddcae1fa2f17cd5ed34a370f5e59ba860df"} Jan 09 11:15:01 crc kubenswrapper[4727]: I0109 11:15:01.510556 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" 
podStartSLOduration=1.510529346 podStartE2EDuration="1.510529346s" podCreationTimestamp="2026-01-09 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:15:01.506315265 +0000 UTC m=+1746.956220036" watchObservedRunningTime="2026-01-09 11:15:01.510529346 +0000 UTC m=+1746.960434127" Jan 09 11:15:02 crc kubenswrapper[4727]: I0109 11:15:02.487886 4727 generic.go:334] "Generic (PLEG): container finished" podID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerID="84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709" exitCode=0 Jan 09 11:15:02 crc kubenswrapper[4727]: I0109 11:15:02.487976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerDied","Data":"84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709"} Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.854072 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.947666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") pod \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.947787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") pod \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.947923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") pod \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\" (UID: \"12b68a71-edf6-4fe6-8f5c-92b1424309c6\") " Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.949201 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "12b68a71-edf6-4fe6-8f5c-92b1424309c6" (UID: "12b68a71-edf6-4fe6-8f5c-92b1424309c6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.949678 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12b68a71-edf6-4fe6-8f5c-92b1424309c6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.955371 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12b68a71-edf6-4fe6-8f5c-92b1424309c6" (UID: "12b68a71-edf6-4fe6-8f5c-92b1424309c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:15:03 crc kubenswrapper[4727]: I0109 11:15:03.957004 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k" (OuterVolumeSpecName: "kube-api-access-djt7k") pod "12b68a71-edf6-4fe6-8f5c-92b1424309c6" (UID: "12b68a71-edf6-4fe6-8f5c-92b1424309c6"). InnerVolumeSpecName "kube-api-access-djt7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.052418 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djt7k\" (UniqueName: \"kubernetes.io/projected/12b68a71-edf6-4fe6-8f5c-92b1424309c6-kube-api-access-djt7k\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.052477 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12b68a71-edf6-4fe6-8f5c-92b1424309c6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.513723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" event={"ID":"12b68a71-edf6-4fe6-8f5c-92b1424309c6","Type":"ContainerDied","Data":"a36b7da4874459996f33c478062bdddcae1fa2f17cd5ed34a370f5e59ba860df"} Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.514159 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a36b7da4874459996f33c478062bdddcae1fa2f17cd5ed34a370f5e59ba860df" Jan 09 11:15:04 crc kubenswrapper[4727]: I0109 11:15:04.513859 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp" Jan 09 11:15:07 crc kubenswrapper[4727]: I0109 11:15:07.860380 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:07 crc kubenswrapper[4727]: E0109 11:15:07.861268 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:22 crc kubenswrapper[4727]: I0109 11:15:22.861634 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:22 crc kubenswrapper[4727]: E0109 11:15:22.862848 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:35 crc kubenswrapper[4727]: I0109 11:15:35.045098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:15:35 crc kubenswrapper[4727]: I0109 11:15:35.054108 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mfhnm"] Jan 09 11:15:35 crc kubenswrapper[4727]: I0109 11:15:35.860295 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:35 crc kubenswrapper[4727]: E0109 11:15:35.861177 
4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:36 crc kubenswrapper[4727]: I0109 11:15:36.873413 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1" path="/var/lib/kubelet/pods/0ff9bafc-f2a0-49f1-8891-f5fc57ac5fc1/volumes" Jan 09 11:15:38 crc kubenswrapper[4727]: I0109 11:15:38.378669 4727 scope.go:117] "RemoveContainer" containerID="6be1414eb15f0ac6ed0ef2cab14a7cb32708b69c107a79d057f310cc4c8112f8" Jan 09 11:15:38 crc kubenswrapper[4727]: I0109 11:15:38.420028 4727 scope.go:117] "RemoveContainer" containerID="9cc57525cba176e3b38766a0b9073b9830c2d27df97aab2c1ef96988dfb68aef" Jan 09 11:15:38 crc kubenswrapper[4727]: I0109 11:15:38.500596 4727 scope.go:117] "RemoveContainer" containerID="61bc0d937c4302ec43f2337bd6079d8b8e9363e85a2c20cc0255fb3a8011cb0e" Jan 09 11:15:42 crc kubenswrapper[4727]: I0109 11:15:42.908736 4727 generic.go:334] "Generic (PLEG): container finished" podID="79cfc519-9725-4957-b42c-d262651895a3" containerID="5d45bc6e13ecbeb42bb2358acab10d095b3fbfd498c6a9f5de9d288fc9598d06" exitCode=0 Jan 09 11:15:42 crc kubenswrapper[4727]: I0109 11:15:42.908843 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerDied","Data":"5d45bc6e13ecbeb42bb2358acab10d095b3fbfd498c6a9f5de9d288fc9598d06"} Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.428777 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.627414 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") pod \"79cfc519-9725-4957-b42c-d262651895a3\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.627467 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") pod \"79cfc519-9725-4957-b42c-d262651895a3\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.627610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") pod \"79cfc519-9725-4957-b42c-d262651895a3\" (UID: \"79cfc519-9725-4957-b42c-d262651895a3\") " Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.637121 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg" (OuterVolumeSpecName: "kube-api-access-l9qkg") pod "79cfc519-9725-4957-b42c-d262651895a3" (UID: "79cfc519-9725-4957-b42c-d262651895a3"). InnerVolumeSpecName "kube-api-access-l9qkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.667297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory" (OuterVolumeSpecName: "inventory") pod "79cfc519-9725-4957-b42c-d262651895a3" (UID: "79cfc519-9725-4957-b42c-d262651895a3"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.670215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79cfc519-9725-4957-b42c-d262651895a3" (UID: "79cfc519-9725-4957-b42c-d262651895a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.731503 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.731992 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9qkg\" (UniqueName: \"kubernetes.io/projected/79cfc519-9725-4957-b42c-d262651895a3-kube-api-access-l9qkg\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.732007 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cfc519-9725-4957-b42c-d262651895a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.944028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" event={"ID":"79cfc519-9725-4957-b42c-d262651895a3","Type":"ContainerDied","Data":"57af9f3728f5b4fee091f76e69c7f54b89f80090673fd53559e2fb8320ba3fe4"} Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 11:15:44.944090 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57af9f3728f5b4fee091f76e69c7f54b89f80090673fd53559e2fb8320ba3fe4" Jan 09 11:15:44 crc kubenswrapper[4727]: I0109 
11:15:44.944175 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.028976 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn"] Jan 09 11:15:45 crc kubenswrapper[4727]: E0109 11:15:45.029671 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerName="collect-profiles" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029693 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerName="collect-profiles" Jan 09 11:15:45 crc kubenswrapper[4727]: E0109 11:15:45.029708 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cfc519-9725-4957-b42c-d262651895a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029735 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cfc519-9725-4957-b42c-d262651895a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029973 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cfc519-9725-4957-b42c-d262651895a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.029994 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" containerName="collect-profiles" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.030942 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.033821 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.034133 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.034140 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.034832 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.040689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.040756 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.040891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.041440 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn"] Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.142347 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.142428 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.142462 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.148118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.164947 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.165545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x2djn\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.359126 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:15:45 crc kubenswrapper[4727]: I0109 11:15:45.946159 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn"] Jan 09 11:15:46 crc kubenswrapper[4727]: I0109 11:15:46.975336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerStarted","Data":"91d16d30258f1cc31f93c452febe85edd90e3ff593872257f558252b50c50686"} Jan 09 11:15:46 crc kubenswrapper[4727]: I0109 11:15:46.976846 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerStarted","Data":"a9e4aceb8fa35aad5c632fb89183c182f77eac6d44e4296f26ed7e363decc7c6"} Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.044326 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" podStartSLOduration=2.336935045 podStartE2EDuration="3.044302801s" podCreationTimestamp="2026-01-09 11:15:45 +0000 UTC" firstStartedPulling="2026-01-09 11:15:45.964658732 +0000 UTC m=+1791.414563513" lastFinishedPulling="2026-01-09 11:15:46.672026498 +0000 UTC m=+1792.121931269" observedRunningTime="2026-01-09 11:15:47.002169546 +0000 UTC m=+1792.452074327" watchObservedRunningTime="2026-01-09 11:15:48.044302801 +0000 UTC m=+1793.494207592" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.053536 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.064526 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:15:48 crc 
kubenswrapper[4727]: I0109 11:15:48.076179 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pss24"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.087219 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.096363 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-56tkr"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.103804 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nd4pq"] Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.860663 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:15:48 crc kubenswrapper[4727]: E0109 11:15:48.860994 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.875025 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695f5777-ca94-4fee-9620-b22eb2a2d9ab" path="/var/lib/kubelet/pods/695f5777-ca94-4fee-9620-b22eb2a2d9ab/volumes" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.876162 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790d27d6-9817-413b-b711-f0be91104704" path="/var/lib/kubelet/pods/790d27d6-9817-413b-b711-f0be91104704/volumes" Jan 09 11:15:48 crc kubenswrapper[4727]: I0109 11:15:48.876820 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52e2c52-54f3-4f0d-9244-1ce7563deb78" 
path="/var/lib/kubelet/pods/a52e2c52-54f3-4f0d-9244-1ce7563deb78/volumes" Jan 09 11:16:02 crc kubenswrapper[4727]: I0109 11:16:02.037845 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:16:02 crc kubenswrapper[4727]: I0109 11:16:02.049557 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5c72l"] Jan 09 11:16:02 crc kubenswrapper[4727]: I0109 11:16:02.873453 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f7de868-87b0-49c7-ad5e-7c528f181550" path="/var/lib/kubelet/pods/5f7de868-87b0-49c7-ad5e-7c528f181550/volumes" Jan 09 11:16:03 crc kubenswrapper[4727]: I0109 11:16:03.861275 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:03 crc kubenswrapper[4727]: E0109 11:16:03.862848 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:18 crc kubenswrapper[4727]: I0109 11:16:18.860219 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:18 crc kubenswrapper[4727]: E0109 11:16:18.861627 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:33 crc 
kubenswrapper[4727]: I0109 11:16:33.859761 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:33 crc kubenswrapper[4727]: E0109 11:16:33.861206 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.618363 4727 scope.go:117] "RemoveContainer" containerID="8c9da7dfda5f54940ae00f9c9f6c3b6698ce4b0778b3db11c1d23ada8f68d4ff" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.669409 4727 scope.go:117] "RemoveContainer" containerID="84958f6b4b1fed9a71a0c9b91b8932532196b305e36de04af4bb1e1f000f02e6" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.728022 4727 scope.go:117] "RemoveContainer" containerID="3f10c6f5c18146a5828c011f330fbca4b0beff7019c56065bfcca5a0b8a923d4" Jan 09 11:16:38 crc kubenswrapper[4727]: I0109 11:16:38.777233 4727 scope.go:117] "RemoveContainer" containerID="8ef6c402149050d5ff055a91a31e2129cc3c102e06f0b1d118c263501750d617" Jan 09 11:16:46 crc kubenswrapper[4727]: I0109 11:16:46.861815 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:46 crc kubenswrapper[4727]: E0109 11:16:46.862956 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.052760 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.060118 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.068983 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.076895 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-q4g4f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.084688 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ljc8f"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.093331 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-911e-account-create-update-hznc7"] Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.877198 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d27352-2f68-4ced-a541-7bbd8bf33fb1" path="/var/lib/kubelet/pods/37d27352-2f68-4ced-a541-7bbd8bf33fb1/volumes" Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.878656 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c40808-e98b-4a31-b057-5c5b38ed5774" path="/var/lib/kubelet/pods/b7c40808-e98b-4a31-b057-5c5b38ed5774/volumes" Jan 09 11:16:56 crc kubenswrapper[4727]: I0109 11:16:56.881340 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2c02d0-08f3-4174-a1a1-44b6b99df774" path="/var/lib/kubelet/pods/bf2c02d0-08f3-4174-a1a1-44b6b99df774/volumes" Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.034918 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:16:57 crc 
kubenswrapper[4727]: I0109 11:16:57.044158 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.053432 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.066640 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qftd4"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.076442 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0b0c-account-create-update-txznh"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.087271 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bf38-account-create-update-j6vxl"] Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.483860 4727 generic.go:334] "Generic (PLEG): container finished" podID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerID="91d16d30258f1cc31f93c452febe85edd90e3ff593872257f558252b50c50686" exitCode=0 Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.483950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerDied","Data":"91d16d30258f1cc31f93c452febe85edd90e3ff593872257f558252b50c50686"} Jan 09 11:16:57 crc kubenswrapper[4727]: I0109 11:16:57.859974 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:16:57 crc kubenswrapper[4727]: E0109 11:16:57.860361 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.877903 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e56a97-f683-4290-b69b-ab92efd58b4c" path="/var/lib/kubelet/pods/21e56a97-f683-4290-b69b-ab92efd58b4c/volumes" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.879150 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784df696-fe59-4d64-841e-53fa77ded98f" path="/var/lib/kubelet/pods/784df696-fe59-4d64-841e-53fa77ded98f/volumes" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.879881 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a403535a-35d2-487c-9fab-20360257ec11" path="/var/lib/kubelet/pods/a403535a-35d2-487c-9fab-20360257ec11/volumes" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.962926 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.980676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") pod \"f1169cca-13ce-4a18-8901-faa73fc5b913\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.980935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") pod \"f1169cca-13ce-4a18-8901-faa73fc5b913\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.980999 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") pod \"f1169cca-13ce-4a18-8901-faa73fc5b913\" (UID: \"f1169cca-13ce-4a18-8901-faa73fc5b913\") " Jan 09 11:16:58 crc kubenswrapper[4727]: I0109 11:16:58.998938 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b" (OuterVolumeSpecName: "kube-api-access-lkp2b") pod "f1169cca-13ce-4a18-8901-faa73fc5b913" (UID: "f1169cca-13ce-4a18-8901-faa73fc5b913"). InnerVolumeSpecName "kube-api-access-lkp2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.020755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory" (OuterVolumeSpecName: "inventory") pod "f1169cca-13ce-4a18-8901-faa73fc5b913" (UID: "f1169cca-13ce-4a18-8901-faa73fc5b913"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.020824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f1169cca-13ce-4a18-8901-faa73fc5b913" (UID: "f1169cca-13ce-4a18-8901-faa73fc5b913"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.087693 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.087746 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkp2b\" (UniqueName: \"kubernetes.io/projected/f1169cca-13ce-4a18-8901-faa73fc5b913-kube-api-access-lkp2b\") on node \"crc\" DevicePath \"\"" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.087764 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f1169cca-13ce-4a18-8901-faa73fc5b913-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.507990 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" event={"ID":"f1169cca-13ce-4a18-8901-faa73fc5b913","Type":"ContainerDied","Data":"a9e4aceb8fa35aad5c632fb89183c182f77eac6d44e4296f26ed7e363decc7c6"} Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.508056 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e4aceb8fa35aad5c632fb89183c182f77eac6d44e4296f26ed7e363decc7c6" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.508107 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x2djn" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.618193 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz"] Jan 09 11:16:59 crc kubenswrapper[4727]: E0109 11:16:59.618942 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.618965 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.619245 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1169cca-13ce-4a18-8901-faa73fc5b913" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.620301 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.623381 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.623796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.624055 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.625479 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.633619 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz"] Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.707182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.707757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: 
I0109 11:16:59.707986 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.810753 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.810832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.810910 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.819540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.819909 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.829891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-m4njz\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:16:59 crc kubenswrapper[4727]: I0109 11:16:59.985732 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:17:00 crc kubenswrapper[4727]: I0109 11:17:00.552914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz"] Jan 09 11:17:01 crc kubenswrapper[4727]: I0109 11:17:01.531021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerStarted","Data":"51793d55c847f74bc3bf3ec9d732c6df90a9d058d7d7fce61b22ff4a0274ebfc"} Jan 09 11:17:01 crc kubenswrapper[4727]: I0109 11:17:01.532108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerStarted","Data":"79387a40119342939bcab0cc5d57f57fd1e62ca05cbd7411244c7bf1e5ba9ffc"} Jan 09 11:17:01 crc kubenswrapper[4727]: I0109 11:17:01.563836 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" podStartSLOduration=1.866388975 podStartE2EDuration="2.563806629s" podCreationTimestamp="2026-01-09 11:16:59 +0000 UTC" firstStartedPulling="2026-01-09 11:17:00.55869291 +0000 UTC m=+1866.008597691" lastFinishedPulling="2026-01-09 11:17:01.256110564 +0000 UTC m=+1866.706015345" observedRunningTime="2026-01-09 11:17:01.550531068 +0000 UTC m=+1867.000435859" watchObservedRunningTime="2026-01-09 11:17:01.563806629 +0000 UTC m=+1867.013711480" Jan 09 11:17:06 crc kubenswrapper[4727]: I0109 11:17:06.582227 4727 generic.go:334] "Generic (PLEG): container finished" podID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerID="51793d55c847f74bc3bf3ec9d732c6df90a9d058d7d7fce61b22ff4a0274ebfc" exitCode=0 Jan 09 11:17:06 crc kubenswrapper[4727]: I0109 11:17:06.582324 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerDied","Data":"51793d55c847f74bc3bf3ec9d732c6df90a9d058d7d7fce61b22ff4a0274ebfc"} Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.047114 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.127209 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") pod \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.127452 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") pod \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.127627 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") pod \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\" (UID: \"6811cbf2-94eb-44a0-ae3e-8f0e35163df5\") " Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.137114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v" (OuterVolumeSpecName: "kube-api-access-47d7v") pod "6811cbf2-94eb-44a0-ae3e-8f0e35163df5" (UID: "6811cbf2-94eb-44a0-ae3e-8f0e35163df5"). InnerVolumeSpecName "kube-api-access-47d7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.160758 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory" (OuterVolumeSpecName: "inventory") pod "6811cbf2-94eb-44a0-ae3e-8f0e35163df5" (UID: "6811cbf2-94eb-44a0-ae3e-8f0e35163df5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.163333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6811cbf2-94eb-44a0-ae3e-8f0e35163df5" (UID: "6811cbf2-94eb-44a0-ae3e-8f0e35163df5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.230677 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.230749 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47d7v\" (UniqueName: \"kubernetes.io/projected/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-kube-api-access-47d7v\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.230765 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6811cbf2-94eb-44a0-ae3e-8f0e35163df5-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.605066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" 
event={"ID":"6811cbf2-94eb-44a0-ae3e-8f0e35163df5","Type":"ContainerDied","Data":"79387a40119342939bcab0cc5d57f57fd1e62ca05cbd7411244c7bf1e5ba9ffc"} Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.605128 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79387a40119342939bcab0cc5d57f57fd1e62ca05cbd7411244c7bf1e5ba9ffc" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.605166 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-m4njz" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.714110 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr"] Jan 09 11:17:08 crc kubenswrapper[4727]: E0109 11:17:08.715169 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.715194 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.715708 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6811cbf2-94eb-44a0-ae3e-8f0e35163df5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.717004 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.725911 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr"] Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726041 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726224 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726379 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.726775 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.745823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.746114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.746203 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.848176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.848279 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.848338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.853742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.855297 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:08 crc kubenswrapper[4727]: I0109 11:17:08.871446 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qs4rr\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:09 crc kubenswrapper[4727]: I0109 11:17:09.060598 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:09 crc kubenswrapper[4727]: I0109 11:17:09.683365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr"] Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.641503 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerStarted","Data":"8d43841106431a9a04b8882c51eb37251279334a2faf489f800d4dba1b0a8b62"} Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.642104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerStarted","Data":"62dba88ce732c071ff647fe31a5b2b8808665fa163c36a298b388ac3c44202b9"} Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.670678 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" podStartSLOduration=2.146475433 podStartE2EDuration="2.670648918s" podCreationTimestamp="2026-01-09 11:17:08 +0000 UTC" firstStartedPulling="2026-01-09 11:17:09.687896098 +0000 UTC m=+1875.137800879" lastFinishedPulling="2026-01-09 11:17:10.212069583 +0000 UTC m=+1875.661974364" observedRunningTime="2026-01-09 11:17:10.661948629 +0000 UTC m=+1876.111853410" watchObservedRunningTime="2026-01-09 11:17:10.670648918 +0000 UTC m=+1876.120553699" Jan 09 11:17:10 crc kubenswrapper[4727]: I0109 11:17:10.861056 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:17:11 crc kubenswrapper[4727]: I0109 11:17:11.686853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"} Jan 09 11:17:25 crc kubenswrapper[4727]: I0109 11:17:25.050225 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:17:25 crc kubenswrapper[4727]: I0109 11:17:25.060097 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6d58k"] Jan 09 11:17:26 crc kubenswrapper[4727]: I0109 11:17:26.880878 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c213a7-1f1e-4866-aa20-019382b42f61" path="/var/lib/kubelet/pods/88c213a7-1f1e-4866-aa20-019382b42f61/volumes" Jan 09 11:17:38 crc kubenswrapper[4727]: I0109 11:17:38.891189 4727 scope.go:117] "RemoveContainer" containerID="f947874cac612f305507a7bdaf8471df8d3875799b74261e1f17af4a0dc3c24e" Jan 09 11:17:38 crc kubenswrapper[4727]: I0109 11:17:38.926206 4727 scope.go:117] "RemoveContainer" containerID="478ae5028a10c820659c5824f58f2f2a67e0f6b5335c5e28c9b5c14e796d35bd" Jan 09 11:17:38 crc kubenswrapper[4727]: I0109 11:17:38.981801 4727 scope.go:117] "RemoveContainer" containerID="e676a05fb9d1c98d54b7cea14e300f90879e2096ab0fd5ac713c7a29a48935ac" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.048641 4727 scope.go:117] "RemoveContainer" containerID="ddf7504037a0d74d61286b57ca98d5ca4686f34d2f909e9a72a2f12480874e58" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.075465 4727 scope.go:117] "RemoveContainer" containerID="e988691ee87e2cfbc967d0e1c928312ff506c1b705fdf61fd63802fa468dc6ff" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.126108 4727 scope.go:117] "RemoveContainer" containerID="339bcb56de0d0083e60bb9f99ee6710c9861edb4bb896039162501a9d46ed6ed" Jan 09 11:17:39 crc kubenswrapper[4727]: I0109 11:17:39.175015 4727 scope.go:117] "RemoveContainer" containerID="c3ed6956b8e31f8503a62e89b83a4ac7a7d349bbdaa2c48c86045a4720314a5c" Jan 
09 11:17:50 crc kubenswrapper[4727]: I0109 11:17:50.062421 4727 generic.go:334] "Generic (PLEG): container finished" podID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerID="8d43841106431a9a04b8882c51eb37251279334a2faf489f800d4dba1b0a8b62" exitCode=0 Jan 09 11:17:50 crc kubenswrapper[4727]: I0109 11:17:50.062558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerDied","Data":"8d43841106431a9a04b8882c51eb37251279334a2faf489f800d4dba1b0a8b62"} Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.510837 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.576779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") pod \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.576864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") pod \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.577037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") pod \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\" (UID: \"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea\") " Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.584299 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm" (OuterVolumeSpecName: "kube-api-access-r8gkm") pod "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" (UID: "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea"). InnerVolumeSpecName "kube-api-access-r8gkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.610596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory" (OuterVolumeSpecName: "inventory") pod "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" (UID: "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.618879 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" (UID: "e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.679118 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gkm\" (UniqueName: \"kubernetes.io/projected/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-kube-api-access-r8gkm\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.679161 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:51 crc kubenswrapper[4727]: I0109 11:17:51.679178 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.108624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" event={"ID":"e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea","Type":"ContainerDied","Data":"62dba88ce732c071ff647fe31a5b2b8808665fa163c36a298b388ac3c44202b9"} Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.109793 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62dba88ce732c071ff647fe31a5b2b8808665fa163c36a298b388ac3c44202b9" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.109204 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qs4rr" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.190096 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s"] Jan 09 11:17:52 crc kubenswrapper[4727]: E0109 11:17:52.190967 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.190984 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.191175 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.191948 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.197734 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.198009 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.198125 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.198194 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.208438 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s"] Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.293409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.293526 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.293710 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.395432 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.395571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.395624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.400632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.401136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.416428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l88s\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.511415 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:17:52 crc kubenswrapper[4727]: I0109 11:17:52.895576 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s"] Jan 09 11:17:53 crc kubenswrapper[4727]: I0109 11:17:53.046098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:17:53 crc kubenswrapper[4727]: I0109 11:17:53.054051 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bd2gt"] Jan 09 11:17:53 crc kubenswrapper[4727]: I0109 11:17:53.133725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerStarted","Data":"34ab025612c8accc0a3d077ad7711b72fb3a0786386f472ea626ccc61d8251ab"} Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.034272 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.044516 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-br2nr"] Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.143580 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerStarted","Data":"24b8cb7c86256279d4e47319da3d87c1e5d0fd8bd60aa38b7566a705e7d9003f"} Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.167865 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" podStartSLOduration=1.7342295330000002 podStartE2EDuration="2.16783265s" podCreationTimestamp="2026-01-09 11:17:52 +0000 UTC" 
firstStartedPulling="2026-01-09 11:17:52.898279985 +0000 UTC m=+1918.348184766" lastFinishedPulling="2026-01-09 11:17:53.331883102 +0000 UTC m=+1918.781787883" observedRunningTime="2026-01-09 11:17:54.162816588 +0000 UTC m=+1919.612721389" watchObservedRunningTime="2026-01-09 11:17:54.16783265 +0000 UTC m=+1919.617737501" Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.877078 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10127ac2-1ffe-4ad6-b483-ff5952f88b4a" path="/var/lib/kubelet/pods/10127ac2-1ffe-4ad6-b483-ff5952f88b4a/volumes" Jan 09 11:17:54 crc kubenswrapper[4727]: I0109 11:17:54.877761 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95f5eef-fff8-427b-9318-ebfcf188f0a9" path="/var/lib/kubelet/pods/c95f5eef-fff8-427b-9318-ebfcf188f0a9/volumes" Jan 09 11:18:27 crc kubenswrapper[4727]: I0109 11:18:27.996966 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.000779 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.008671 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.094387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.094575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.094694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.197422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.197538 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.197621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.198371 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.198610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.228833 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"redhat-operators-vmhd8\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.333579 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:28 crc kubenswrapper[4727]: I0109 11:18:28.838490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.383605 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.387351 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.403489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.517585 4727 generic.go:334] "Generic (PLEG): container finished" podID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" exitCode=0 Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.517670 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1"} Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.517741 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerStarted","Data":"4296e666b17eebb6a3607981a9d50b3da27e937ddf21b913930259e0af4c499e"} Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.532992 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"redhat-marketplace-kg722\" (UID: 
\"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.533096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.533159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635445 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod 
\"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.635978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.636661 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.662459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod \"redhat-marketplace-kg722\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:29 crc kubenswrapper[4727]: I0109 11:18:29.707864 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.253858 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.531991 4727 generic.go:334] "Generic (PLEG): container finished" podID="282e6323-e597-4905-a7d7-f885b7eff305" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" exitCode=0 Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.532059 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804"} Jan 09 11:18:30 crc kubenswrapper[4727]: I0109 11:18:30.532095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerStarted","Data":"94dea0e522fa183593e3777105081b07c39058a2c243f7c9790b7aade563bd6c"} Jan 09 11:18:31 crc kubenswrapper[4727]: I0109 11:18:31.548672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerStarted","Data":"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2"} Jan 09 11:18:32 crc kubenswrapper[4727]: I0109 11:18:32.561485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerStarted","Data":"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6"} Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.574676 4727 generic.go:334] "Generic (PLEG): container finished" podID="282e6323-e597-4905-a7d7-f885b7eff305" 
containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" exitCode=0 Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.574836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6"} Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.586025 4727 generic.go:334] "Generic (PLEG): container finished" podID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" exitCode=0 Jan 09 11:18:33 crc kubenswrapper[4727]: I0109 11:18:33.586091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2"} Jan 09 11:18:34 crc kubenswrapper[4727]: I0109 11:18:34.598042 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerStarted","Data":"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb"} Jan 09 11:18:34 crc kubenswrapper[4727]: I0109 11:18:34.621566 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kg722" podStartSLOduration=1.907251868 podStartE2EDuration="5.621537895s" podCreationTimestamp="2026-01-09 11:18:29 +0000 UTC" firstStartedPulling="2026-01-09 11:18:30.534157895 +0000 UTC m=+1955.984062666" lastFinishedPulling="2026-01-09 11:18:34.248443912 +0000 UTC m=+1959.698348693" observedRunningTime="2026-01-09 11:18:34.619442879 +0000 UTC m=+1960.069347660" watchObservedRunningTime="2026-01-09 11:18:34.621537895 +0000 UTC m=+1960.071442706" Jan 09 11:18:35 crc kubenswrapper[4727]: I0109 
11:18:35.621157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerStarted","Data":"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67"} Jan 09 11:18:35 crc kubenswrapper[4727]: I0109 11:18:35.648204 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmhd8" podStartSLOduration=3.791780057 podStartE2EDuration="8.648176225s" podCreationTimestamp="2026-01-09 11:18:27 +0000 UTC" firstStartedPulling="2026-01-09 11:18:29.519953456 +0000 UTC m=+1954.969858237" lastFinishedPulling="2026-01-09 11:18:34.376349624 +0000 UTC m=+1959.826254405" observedRunningTime="2026-01-09 11:18:35.643151762 +0000 UTC m=+1961.093056563" watchObservedRunningTime="2026-01-09 11:18:35.648176225 +0000 UTC m=+1961.098081006" Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.057109 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.066820 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtb77"] Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.333774 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.333880 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:38 crc kubenswrapper[4727]: I0109 11:18:38.873613 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd540af1-9862-4759-ad16-587bbd49fea1" path="/var/lib/kubelet/pods/fd540af1-9862-4759-ad16-587bbd49fea1/volumes" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.329958 4727 scope.go:117] "RemoveContainer" 
containerID="dc066e04c47aa4447236d231652b0e4e8be0db4783c245457a692ac5259ca534" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.388479 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmhd8" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" probeResult="failure" output=< Jan 09 11:18:39 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Jan 09 11:18:39 crc kubenswrapper[4727]: > Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.414708 4727 scope.go:117] "RemoveContainer" containerID="f76d88f648ab447092c643e9a74e7887bbdfb7003074d297848426f81f8aa677" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.470114 4727 scope.go:117] "RemoveContainer" containerID="2149f5b1c0ab1c82602e241d07a77642b5d9e612402ac4639e68a30682922072" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.708776 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.708854 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:39 crc kubenswrapper[4727]: I0109 11:18:39.760848 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:40 crc kubenswrapper[4727]: I0109 11:18:40.727895 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:40 crc kubenswrapper[4727]: I0109 11:18:40.791950 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:42 crc kubenswrapper[4727]: I0109 11:18:42.712690 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kg722" 
podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" containerID="cri-o://efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" gracePeriod=2 Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.173194 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.281721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") pod \"282e6323-e597-4905-a7d7-f885b7eff305\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.281787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") pod \"282e6323-e597-4905-a7d7-f885b7eff305\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.282054 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") pod \"282e6323-e597-4905-a7d7-f885b7eff305\" (UID: \"282e6323-e597-4905-a7d7-f885b7eff305\") " Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.282722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities" (OuterVolumeSpecName: "utilities") pod "282e6323-e597-4905-a7d7-f885b7eff305" (UID: "282e6323-e597-4905-a7d7-f885b7eff305"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.289200 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj" (OuterVolumeSpecName: "kube-api-access-2lxnj") pod "282e6323-e597-4905-a7d7-f885b7eff305" (UID: "282e6323-e597-4905-a7d7-f885b7eff305"). InnerVolumeSpecName "kube-api-access-2lxnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.315732 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "282e6323-e597-4905-a7d7-f885b7eff305" (UID: "282e6323-e597-4905-a7d7-f885b7eff305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.384725 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.384780 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282e6323-e597-4905-a7d7-f885b7eff305-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.384792 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lxnj\" (UniqueName: \"kubernetes.io/projected/282e6323-e597-4905-a7d7-f885b7eff305-kube-api-access-2lxnj\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725414 4727 generic.go:334] "Generic (PLEG): container finished" podID="282e6323-e597-4905-a7d7-f885b7eff305" 
containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" exitCode=0 Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725588 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg722" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb"} Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.725983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg722" event={"ID":"282e6323-e597-4905-a7d7-f885b7eff305","Type":"ContainerDied","Data":"94dea0e522fa183593e3777105081b07c39058a2c243f7c9790b7aade563bd6c"} Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.726011 4727 scope.go:117] "RemoveContainer" containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.753896 4727 scope.go:117] "RemoveContainer" containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.770841 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.781197 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg722"] Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.796287 4727 scope.go:117] "RemoveContainer" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.836114 4727 scope.go:117] "RemoveContainer" containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" Jan 09 
11:18:43 crc kubenswrapper[4727]: E0109 11:18:43.837664 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb\": container with ID starting with efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb not found: ID does not exist" containerID="efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.837774 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb"} err="failed to get container status \"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb\": rpc error: code = NotFound desc = could not find container \"efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb\": container with ID starting with efb0b63fdfdd0ab13ffad0590b250e9be9a065191408d50779e786063db92cfb not found: ID does not exist" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.837829 4727 scope.go:117] "RemoveContainer" containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" Jan 09 11:18:43 crc kubenswrapper[4727]: E0109 11:18:43.838259 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6\": container with ID starting with 56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6 not found: ID does not exist" containerID="56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.838293 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6"} err="failed to get container status 
\"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6\": rpc error: code = NotFound desc = could not find container \"56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6\": container with ID starting with 56bb6d52c9d987511537c62f6e5d648e1a5f529e48653836c4396341e5885cf6 not found: ID does not exist" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.838309 4727 scope.go:117] "RemoveContainer" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" Jan 09 11:18:43 crc kubenswrapper[4727]: E0109 11:18:43.838835 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804\": container with ID starting with 011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804 not found: ID does not exist" containerID="011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804" Jan 09 11:18:43 crc kubenswrapper[4727]: I0109 11:18:43.838867 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804"} err="failed to get container status \"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804\": rpc error: code = NotFound desc = could not find container \"011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804\": container with ID starting with 011223b564cf68b1d7cea14c48613cc63c6af4c10c4c2b7d0f496640324a6804 not found: ID does not exist" Jan 09 11:18:44 crc kubenswrapper[4727]: I0109 11:18:44.871618 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282e6323-e597-4905-a7d7-f885b7eff305" path="/var/lib/kubelet/pods/282e6323-e597-4905-a7d7-f885b7eff305/volumes" Jan 09 11:18:46 crc kubenswrapper[4727]: I0109 11:18:46.767843 4727 generic.go:334] "Generic (PLEG): container finished" podID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" 
containerID="24b8cb7c86256279d4e47319da3d87c1e5d0fd8bd60aa38b7566a705e7d9003f" exitCode=0 Jan 09 11:18:46 crc kubenswrapper[4727]: I0109 11:18:46.767923 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerDied","Data":"24b8cb7c86256279d4e47319da3d87c1e5d0fd8bd60aa38b7566a705e7d9003f"} Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.392072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.452003 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.472793 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.545381 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") pod \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.545892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") pod \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.545988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") pod \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\" (UID: \"fc6114d6-7052-46b3-a8e5-c8b9731cc92c\") " Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.560259 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj" (OuterVolumeSpecName: "kube-api-access-5l2cj") pod "fc6114d6-7052-46b3-a8e5-c8b9731cc92c" (UID: "fc6114d6-7052-46b3-a8e5-c8b9731cc92c"). InnerVolumeSpecName "kube-api-access-5l2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.578591 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory" (OuterVolumeSpecName: "inventory") pod "fc6114d6-7052-46b3-a8e5-c8b9731cc92c" (UID: "fc6114d6-7052-46b3-a8e5-c8b9731cc92c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.580834 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc6114d6-7052-46b3-a8e5-c8b9731cc92c" (UID: "fc6114d6-7052-46b3-a8e5-c8b9731cc92c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.649369 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.649420 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.649439 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2cj\" (UniqueName: \"kubernetes.io/projected/fc6114d6-7052-46b3-a8e5-c8b9731cc92c-kube-api-access-5l2cj\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.801734 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.802795 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l88s" event={"ID":"fc6114d6-7052-46b3-a8e5-c8b9731cc92c","Type":"ContainerDied","Data":"34ab025612c8accc0a3d077ad7711b72fb3a0786386f472ea626ccc61d8251ab"} Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.802865 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ab025612c8accc0a3d077ad7711b72fb3a0786386f472ea626ccc61d8251ab" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.827943 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.912682 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9n6wb"] Jan 09 
11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913262 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-utilities" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-utilities" Jan 09 11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-content" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913330 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="extract-content" Jan 09 11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913348 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913355 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" Jan 09 11:18:48 crc kubenswrapper[4727]: E0109 11:18:48.913372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913382 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913666 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6114d6-7052-46b3-a8e5-c8b9731cc92c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.913703 4727 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="282e6323-e597-4905-a7d7-f885b7eff305" containerName="registry-server" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.914677 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.916911 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.917390 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.918936 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.921365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:18:48 crc kubenswrapper[4727]: I0109 11:18:48.924636 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9n6wb"] Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.060905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.060991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: 
\"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.061062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.163693 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.163782 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.163853 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.169091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.171201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.185019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"ssh-known-hosts-edpm-deployment-9n6wb\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.265175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.810501 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmhd8" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" containerID="cri-o://e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" gracePeriod=2 Jan 09 11:18:49 crc kubenswrapper[4727]: I0109 11:18:49.868797 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9n6wb"] Jan 09 11:18:49 crc kubenswrapper[4727]: W0109 11:18:49.879763 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod247ff33e_a764_4e75_9d54_2c45ae8d8ca7.slice/crio-9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a WatchSource:0}: Error finding container 9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a: Status 404 returned error can't find the container with id 9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.364097 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.498778 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") pod \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.498884 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") pod \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.498943 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") pod \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\" (UID: \"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd\") " Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.499772 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities" (OuterVolumeSpecName: "utilities") pod "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" (UID: "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.506848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf" (OuterVolumeSpecName: "kube-api-access-d5kbf") pod "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" (UID: "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd"). InnerVolumeSpecName "kube-api-access-d5kbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.601538 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kbf\" (UniqueName: \"kubernetes.io/projected/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-kube-api-access-d5kbf\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.602220 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.627563 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" (UID: "57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.704054 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823784 4727 generic.go:334] "Generic (PLEG): container finished" podID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" exitCode=0 Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-vmhd8" event={"ID":"57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd","Type":"ContainerDied","Data":"4296e666b17eebb6a3607981a9d50b3da27e937ddf21b913930259e0af4c499e"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823919 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmhd8" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.823946 4727 scope.go:117] "RemoveContainer" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.826049 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerStarted","Data":"721dfd54ebdaaf992a29619a2bdfaf035cdad7bf634052a03310ce06e2b9eb98"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.826110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerStarted","Data":"9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a"} Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.854497 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" podStartSLOduration=2.210729444 podStartE2EDuration="2.854471821s" podCreationTimestamp="2026-01-09 11:18:48 +0000 UTC" firstStartedPulling="2026-01-09 11:18:49.886882368 +0000 UTC m=+1975.336787149" lastFinishedPulling="2026-01-09 11:18:50.530624745 +0000 UTC m=+1975.980529526" observedRunningTime="2026-01-09 11:18:50.845406681 +0000 UTC m=+1976.295311472" watchObservedRunningTime="2026-01-09 11:18:50.854471821 +0000 UTC m=+1976.304376602" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.856807 4727 scope.go:117] "RemoveContainer" 
containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.897402 4727 scope.go:117] "RemoveContainer" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.924402 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.924457 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmhd8"] Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.934959 4727 scope.go:117] "RemoveContainer" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" Jan 09 11:18:50 crc kubenswrapper[4727]: E0109 11:18:50.936260 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67\": container with ID starting with e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67 not found: ID does not exist" containerID="e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.936346 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67"} err="failed to get container status \"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67\": rpc error: code = NotFound desc = could not find container \"e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67\": container with ID starting with e982f54d1c037a3e0ceac5440b3c8195c277868ba8edba60012c30f3bddeaa67 not found: ID does not exist" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.936398 4727 scope.go:117] "RemoveContainer" 
containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" Jan 09 11:18:50 crc kubenswrapper[4727]: E0109 11:18:50.936929 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2\": container with ID starting with faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2 not found: ID does not exist" containerID="faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.937041 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2"} err="failed to get container status \"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2\": rpc error: code = NotFound desc = could not find container \"faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2\": container with ID starting with faddb0c6b374d49ef7711e2cb63d1e089a7181ad2795307d0573279ec31277b2 not found: ID does not exist" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.937129 4727 scope.go:117] "RemoveContainer" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" Jan 09 11:18:50 crc kubenswrapper[4727]: E0109 11:18:50.937692 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1\": container with ID starting with 9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1 not found: ID does not exist" containerID="9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1" Jan 09 11:18:50 crc kubenswrapper[4727]: I0109 11:18:50.937720 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1"} err="failed to get container status \"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1\": rpc error: code = NotFound desc = could not find container \"9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1\": container with ID starting with 9c910264dd18a3682deafd0926e5c7951f1b16844b235a27e29c99a87630fbb1 not found: ID does not exist" Jan 09 11:18:52 crc kubenswrapper[4727]: I0109 11:18:52.881311 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" path="/var/lib/kubelet/pods/57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd/volumes" Jan 09 11:18:58 crc kubenswrapper[4727]: I0109 11:18:58.904712 4727 generic.go:334] "Generic (PLEG): container finished" podID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerID="721dfd54ebdaaf992a29619a2bdfaf035cdad7bf634052a03310ce06e2b9eb98" exitCode=0 Jan 09 11:18:58 crc kubenswrapper[4727]: I0109 11:18:58.905485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerDied","Data":"721dfd54ebdaaf992a29619a2bdfaf035cdad7bf634052a03310ce06e2b9eb98"} Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.343968 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.448117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") pod \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.448246 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") pod \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.448376 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") pod \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\" (UID: \"247ff33e-a764-4e75-9d54-2c45ae8d8ca7\") " Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.456679 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w" (OuterVolumeSpecName: "kube-api-access-f9v2w") pod "247ff33e-a764-4e75-9d54-2c45ae8d8ca7" (UID: "247ff33e-a764-4e75-9d54-2c45ae8d8ca7"). InnerVolumeSpecName "kube-api-access-f9v2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.481750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "247ff33e-a764-4e75-9d54-2c45ae8d8ca7" (UID: "247ff33e-a764-4e75-9d54-2c45ae8d8ca7"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.484795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "247ff33e-a764-4e75-9d54-2c45ae8d8ca7" (UID: "247ff33e-a764-4e75-9d54-2c45ae8d8ca7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.552012 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.552112 4727 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.552153 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9v2w\" (UniqueName: \"kubernetes.io/projected/247ff33e-a764-4e75-9d54-2c45ae8d8ca7-kube-api-access-f9v2w\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.926866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" event={"ID":"247ff33e-a764-4e75-9d54-2c45ae8d8ca7","Type":"ContainerDied","Data":"9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a"} Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.927361 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9165bfea1c5549f68fb8931fa997dd78cd2988bd506e9d1a13bc04d45099099a" Jan 09 11:19:00 crc kubenswrapper[4727]: I0109 11:19:00.927451 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9n6wb" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.030602 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg"] Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031193 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-content" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031215 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-content" Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031256 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerName="ssh-known-hosts-edpm-deployment" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031264 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerName="ssh-known-hosts-edpm-deployment" Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031291 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" Jan 09 11:19:01 crc kubenswrapper[4727]: E0109 11:19:01.031299 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-utilities" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.031305 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="extract-utilities" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 
11:19:01.043009 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f2ab3d-5ee2-4f66-9166-b9bd89cc5fdd" containerName="registry-server" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.043058 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="247ff33e-a764-4e75-9d54-2c45ae8d8ca7" containerName="ssh-known-hosts-edpm-deployment" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.043855 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg"] Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.043969 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.047233 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.047892 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.047984 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.049285 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.168359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 
11:19:01.168645 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.168692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.271351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.271425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.272700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.277240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.284450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.290981 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-27qwg\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.363806 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:01 crc kubenswrapper[4727]: I0109 11:19:01.946122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg"] Jan 09 11:19:02 crc kubenswrapper[4727]: I0109 11:19:02.959852 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerStarted","Data":"76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e"} Jan 09 11:19:03 crc kubenswrapper[4727]: I0109 11:19:03.971158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerStarted","Data":"b26baaff3461f4a0d9e23e0a86fe29bb590cb12134075b57ba0420af5c684ab7"} Jan 09 11:19:03 crc kubenswrapper[4727]: I0109 11:19:03.995368 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" podStartSLOduration=2.0832852060000002 podStartE2EDuration="2.995340597s" podCreationTimestamp="2026-01-09 11:19:01 +0000 UTC" firstStartedPulling="2026-01-09 11:19:01.945561341 +0000 UTC m=+1987.395466112" lastFinishedPulling="2026-01-09 11:19:02.857616722 +0000 UTC m=+1988.307521503" observedRunningTime="2026-01-09 11:19:03.990100408 +0000 UTC m=+1989.440005189" watchObservedRunningTime="2026-01-09 11:19:03.995340597 +0000 UTC m=+1989.445245388" Jan 09 11:19:12 crc kubenswrapper[4727]: I0109 11:19:12.045440 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f717d58-9e42-4359-89e8-70a60345d546" containerID="b26baaff3461f4a0d9e23e0a86fe29bb590cb12134075b57ba0420af5c684ab7" exitCode=0 Jan 09 11:19:12 crc kubenswrapper[4727]: I0109 11:19:12.045559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerDied","Data":"b26baaff3461f4a0d9e23e0a86fe29bb590cb12134075b57ba0420af5c684ab7"} Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.491122 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.587341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") pod \"6f717d58-9e42-4359-89e8-70a60345d546\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.587540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") pod \"6f717d58-9e42-4359-89e8-70a60345d546\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.587675 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") pod \"6f717d58-9e42-4359-89e8-70a60345d546\" (UID: \"6f717d58-9e42-4359-89e8-70a60345d546\") " Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.595641 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx" (OuterVolumeSpecName: "kube-api-access-fb8bx") pod "6f717d58-9e42-4359-89e8-70a60345d546" (UID: "6f717d58-9e42-4359-89e8-70a60345d546"). InnerVolumeSpecName "kube-api-access-fb8bx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.619672 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory" (OuterVolumeSpecName: "inventory") pod "6f717d58-9e42-4359-89e8-70a60345d546" (UID: "6f717d58-9e42-4359-89e8-70a60345d546"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.623085 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f717d58-9e42-4359-89e8-70a60345d546" (UID: "6f717d58-9e42-4359-89e8-70a60345d546"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.691318 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.691682 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f717d58-9e42-4359-89e8-70a60345d546-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:13 crc kubenswrapper[4727]: I0109 11:19:13.691759 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8bx\" (UniqueName: \"kubernetes.io/projected/6f717d58-9e42-4359-89e8-70a60345d546-kube-api-access-fb8bx\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.066772 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" 
event={"ID":"6f717d58-9e42-4359-89e8-70a60345d546","Type":"ContainerDied","Data":"76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e"} Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.066821 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.066901 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-27qwg" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.153173 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd"] Jan 09 11:19:14 crc kubenswrapper[4727]: E0109 11:19:14.153796 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f717d58-9e42-4359-89e8-70a60345d546" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.153819 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f717d58-9e42-4359-89e8-70a60345d546" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.154055 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f717d58-9e42-4359-89e8-70a60345d546" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.154884 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.163549 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd"] Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.165875 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.165992 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.166101 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.166187 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.203793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.203928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 
11:19:14.204020 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: E0109 11:19:14.297719 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f717d58_9e42_4359_89e8_70a60345d546.slice/crio-76307acac973029acf1ea70c3750a8c8d87c1fc0eae9ae367b63617b0247502e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f717d58_9e42_4359_89e8_70a60345d546.slice\": RecentStats: unable to find data in memory cache]" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.306338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.306499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.306618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.314078 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.316053 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.329372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:14 crc kubenswrapper[4727]: I0109 11:19:14.491113 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:15 crc kubenswrapper[4727]: I0109 11:19:15.097642 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd"] Jan 09 11:19:15 crc kubenswrapper[4727]: I0109 11:19:15.105891 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:19:16 crc kubenswrapper[4727]: I0109 11:19:16.090041 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerStarted","Data":"9c3e5c27749a5c29de930643a249290a798e51772386006a26d6344c344a1772"} Jan 09 11:19:17 crc kubenswrapper[4727]: I0109 11:19:17.101683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerStarted","Data":"5f83596c1e469c63ef0e98d3e7a5155782419cbcd0d1d7c8568ad4945944466c"} Jan 09 11:19:17 crc kubenswrapper[4727]: I0109 11:19:17.131340 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" podStartSLOduration=2.211464595 podStartE2EDuration="3.131313803s" podCreationTimestamp="2026-01-09 11:19:14 +0000 UTC" firstStartedPulling="2026-01-09 11:19:15.105675547 +0000 UTC m=+2000.555580328" lastFinishedPulling="2026-01-09 11:19:16.025524755 +0000 UTC m=+2001.475429536" observedRunningTime="2026-01-09 11:19:17.124083811 +0000 UTC m=+2002.573988592" watchObservedRunningTime="2026-01-09 11:19:17.131313803 +0000 UTC m=+2002.581218584" Jan 09 11:19:27 crc kubenswrapper[4727]: I0109 11:19:27.204646 4727 generic.go:334] "Generic (PLEG): container finished" podID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" 
containerID="5f83596c1e469c63ef0e98d3e7a5155782419cbcd0d1d7c8568ad4945944466c" exitCode=0 Jan 09 11:19:27 crc kubenswrapper[4727]: I0109 11:19:27.205537 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerDied","Data":"5f83596c1e469c63ef0e98d3e7a5155782419cbcd0d1d7c8568ad4945944466c"} Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.660857 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.782579 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") pod \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.782676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") pod \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.782880 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") pod \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\" (UID: \"72a53995-d5d0-4795-a1c7-f8a570a0ff6a\") " Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.790364 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz" (OuterVolumeSpecName: "kube-api-access-q2grz") pod 
"72a53995-d5d0-4795-a1c7-f8a570a0ff6a" (UID: "72a53995-d5d0-4795-a1c7-f8a570a0ff6a"). InnerVolumeSpecName "kube-api-access-q2grz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.815454 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory" (OuterVolumeSpecName: "inventory") pod "72a53995-d5d0-4795-a1c7-f8a570a0ff6a" (UID: "72a53995-d5d0-4795-a1c7-f8a570a0ff6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.815839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72a53995-d5d0-4795-a1c7-f8a570a0ff6a" (UID: "72a53995-d5d0-4795-a1c7-f8a570a0ff6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.886469 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.886529 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:28 crc kubenswrapper[4727]: I0109 11:19:28.886545 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2grz\" (UniqueName: \"kubernetes.io/projected/72a53995-d5d0-4795-a1c7-f8a570a0ff6a-kube-api-access-q2grz\") on node \"crc\" DevicePath \"\"" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.225840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" event={"ID":"72a53995-d5d0-4795-a1c7-f8a570a0ff6a","Type":"ContainerDied","Data":"9c3e5c27749a5c29de930643a249290a798e51772386006a26d6344c344a1772"} Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.225905 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3e5c27749a5c29de930643a249290a798e51772386006a26d6344c344a1772" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.225917 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.347751 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9"] Jan 09 11:19:29 crc kubenswrapper[4727]: E0109 11:19:29.348830 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.348856 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.349130 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a53995-d5d0-4795-a1c7-f8a570a0ff6a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.350102 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.352789 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.352789 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355600 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355828 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355855 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.355982 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.356196 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.357490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.358593 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9"] Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.499981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500034 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500108 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500521 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.500991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.501579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603820 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603865 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.603990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604021 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604067 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604092 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604161 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.604191 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.611074 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: 
\"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.611133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.612122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.613621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.613663 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 
11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.613994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.615873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.617141 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.618184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.618347 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.618731 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.619223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.619471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.626620 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qplw9\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:29 crc kubenswrapper[4727]: I0109 11:19:29.683854 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:19:30 crc kubenswrapper[4727]: I0109 11:19:30.237314 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9"] Jan 09 11:19:31 crc kubenswrapper[4727]: I0109 11:19:31.249984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerStarted","Data":"b284f99069e94bd8e39b291ab4f4ab645d853c164b98792b5677381efef6064e"} Jan 09 11:19:32 crc kubenswrapper[4727]: I0109 11:19:32.262986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerStarted","Data":"6ed6b623442e77a1da46af05fa2bcea2b99c6d0df048d1d6d510c677429ea804"} Jan 09 11:19:32 crc kubenswrapper[4727]: I0109 11:19:32.293802 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" podStartSLOduration=2.410821419 podStartE2EDuration="3.293782188s" podCreationTimestamp="2026-01-09 11:19:29 +0000 UTC" firstStartedPulling="2026-01-09 11:19:30.243811197 +0000 UTC m=+2015.693715978" lastFinishedPulling="2026-01-09 11:19:31.126771956 +0000 UTC m=+2016.576676747" observedRunningTime="2026-01-09 11:19:32.289012841 +0000 UTC m=+2017.738917662" watchObservedRunningTime="2026-01-09 11:19:32.293782188 +0000 UTC m=+2017.743686969" Jan 09 11:19:39 crc kubenswrapper[4727]: I0109 11:19:39.404970 4727 
patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:19:39 crc kubenswrapper[4727]: I0109 11:19:39.405908 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:20:09 crc kubenswrapper[4727]: I0109 11:20:09.405707 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:20:09 crc kubenswrapper[4727]: I0109 11:20:09.406433 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:20:10 crc kubenswrapper[4727]: I0109 11:20:10.662460 4727 generic.go:334] "Generic (PLEG): container finished" podID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerID="6ed6b623442e77a1da46af05fa2bcea2b99c6d0df048d1d6d510c677429ea804" exitCode=0 Jan 09 11:20:10 crc kubenswrapper[4727]: I0109 11:20:10.662556 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" 
event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerDied","Data":"6ed6b623442e77a1da46af05fa2bcea2b99c6d0df048d1d6d510c677429ea804"} Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.138746 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.141703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.141883 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.141952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.142157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143116 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143585 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143620 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143753 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143809 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.143935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\" (UID: \"a4f9d22c-83b0-4c0c-95e3-a2b2937908db\") " Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.153535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.153926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.154979 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155170 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155133 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2" (OuterVolumeSpecName: "kube-api-access-7bmj2") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "kube-api-access-7bmj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155239 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155263 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.155945 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.157419 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.157649 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.162780 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.170723 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.226274 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.226316 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory" (OuterVolumeSpecName: "inventory") pod "a4f9d22c-83b0-4c0c-95e3-a2b2937908db" (UID: "a4f9d22c-83b0-4c0c-95e3-a2b2937908db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247683 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247736 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247754 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc 
kubenswrapper[4727]: I0109 11:20:12.247770 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247783 4727 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247796 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247810 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bmj2\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-kube-api-access-7bmj2\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247824 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247836 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247848 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247862 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247876 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247889 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.247903 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f9d22c-83b0-4c0c-95e3-a2b2937908db-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.698281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" event={"ID":"a4f9d22c-83b0-4c0c-95e3-a2b2937908db","Type":"ContainerDied","Data":"b284f99069e94bd8e39b291ab4f4ab645d853c164b98792b5677381efef6064e"} Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.699081 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b284f99069e94bd8e39b291ab4f4ab645d853c164b98792b5677381efef6064e" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.698379 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qplw9" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.808782 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm"] Jan 09 11:20:12 crc kubenswrapper[4727]: E0109 11:20:12.809427 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.809456 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.810294 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f9d22c-83b0-4c0c-95e3-a2b2937908db" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.811289 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.816352 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.816447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.820409 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.820426 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.820571 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.831713 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm"] Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860264 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860306 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.860386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961783 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961859 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.961981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.963605 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.967187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.967233 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.967669 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:12 crc kubenswrapper[4727]: I0109 11:20:12.980306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rhzcm\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:13 crc kubenswrapper[4727]: I0109 11:20:13.135531 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:20:13 crc kubenswrapper[4727]: I0109 11:20:13.525481 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm"] Jan 09 11:20:13 crc kubenswrapper[4727]: I0109 11:20:13.711527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerStarted","Data":"b3fec9ce625c04eecefc526e66fe07c8ef5f1f066415dfc8184f8ca354b3bf7d"} Jan 09 11:20:14 crc kubenswrapper[4727]: I0109 11:20:14.727061 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerStarted","Data":"71ddd9fdf4a470173413312cb828e861c44bb5121021ea88ef19eced9d9cb7bf"} Jan 09 11:20:14 crc kubenswrapper[4727]: I0109 11:20:14.749036 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" podStartSLOduration=2.247235573 podStartE2EDuration="2.749005987s" podCreationTimestamp="2026-01-09 11:20:12 +0000 UTC" firstStartedPulling="2026-01-09 11:20:13.529900634 +0000 UTC m=+2058.979805415" lastFinishedPulling="2026-01-09 11:20:14.031671058 +0000 UTC m=+2059.481575829" observedRunningTime="2026-01-09 11:20:14.746826489 +0000 UTC m=+2060.196731310" watchObservedRunningTime="2026-01-09 11:20:14.749005987 +0000 UTC m=+2060.198910778" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.405294 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.406154 4727 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.406240 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.407285 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.407357 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd" gracePeriod=600 Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.997846 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd" exitCode=0 Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.997903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"} Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 
11:20:39.998304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"} Jan 09 11:20:39 crc kubenswrapper[4727]: I0109 11:20:39.998347 4727 scope.go:117] "RemoveContainer" containerID="8791446404b609175741eaa84893184676de694fd053f56099868e80c8474019" Jan 09 11:21:20 crc kubenswrapper[4727]: I0109 11:21:20.462224 4727 generic.go:334] "Generic (PLEG): container finished" podID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerID="71ddd9fdf4a470173413312cb828e861c44bb5121021ea88ef19eced9d9cb7bf" exitCode=0 Jan 09 11:21:20 crc kubenswrapper[4727]: I0109 11:21:20.462322 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerDied","Data":"71ddd9fdf4a470173413312cb828e861c44bb5121021ea88ef19eced9d9cb7bf"} Jan 09 11:21:21 crc kubenswrapper[4727]: I0109 11:21:21.972545 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.074831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.075328 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.075702 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.075838 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.076763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") pod \"5ebde73e-573e-4b52-b779-dd3cd03761e0\" (UID: \"5ebde73e-573e-4b52-b779-dd3cd03761e0\") " Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.082958 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl" (OuterVolumeSpecName: "kube-api-access-cx5hl") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "kube-api-access-cx5hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.084833 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.125852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.126402 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory" (OuterVolumeSpecName: "inventory") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.144340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5ebde73e-573e-4b52-b779-dd3cd03761e0" (UID: "5ebde73e-573e-4b52-b779-dd3cd03761e0"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182002 4727 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182635 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182737 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182854 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ebde73e-573e-4b52-b779-dd3cd03761e0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.182934 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5hl\" (UniqueName: \"kubernetes.io/projected/5ebde73e-573e-4b52-b779-dd3cd03761e0-kube-api-access-cx5hl\") on node \"crc\" DevicePath \"\"" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.485839 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" event={"ID":"5ebde73e-573e-4b52-b779-dd3cd03761e0","Type":"ContainerDied","Data":"b3fec9ce625c04eecefc526e66fe07c8ef5f1f066415dfc8184f8ca354b3bf7d"} Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.485879 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rhzcm" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.485891 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3fec9ce625c04eecefc526e66fe07c8ef5f1f066415dfc8184f8ca354b3bf7d" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.599536 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82"] Jan 09 11:21:22 crc kubenswrapper[4727]: E0109 11:21:22.599984 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.600003 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.600193 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebde73e-573e-4b52-b779-dd3cd03761e0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.600866 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.613467 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.613818 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.613989 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.614126 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.614648 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.614928 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.615913 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82"] Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692283 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692399 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.692541 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795840 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.795894 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.801642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.807305 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.815429 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:22 crc kubenswrapper[4727]: I0109 11:21:22.941130 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:21:23 crc kubenswrapper[4727]: I0109 11:21:23.516106 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82"] Jan 09 11:21:24 crc kubenswrapper[4727]: I0109 11:21:24.512080 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerStarted","Data":"afa19bd0290bcc947a157dfed1f40ca5489236d6b5f1ccbce8ce6fcc5af45edf"} Jan 09 11:21:25 crc kubenswrapper[4727]: I0109 11:21:25.543465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerStarted","Data":"917763738c78c07dc56b747fa98f3e04970c051a5fb817aef84285f08efb7048"} Jan 09 11:21:25 crc kubenswrapper[4727]: I0109 11:21:25.571229 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" podStartSLOduration=2.577331133 podStartE2EDuration="3.571194926s" podCreationTimestamp="2026-01-09 11:21:22 +0000 UTC" firstStartedPulling="2026-01-09 11:21:23.536390361 +0000 UTC m=+2128.986295162" lastFinishedPulling="2026-01-09 11:21:24.530254174 +0000 UTC m=+2129.980158955" observedRunningTime="2026-01-09 11:21:25.567532941 +0000 UTC m=+2131.017437732" watchObservedRunningTime="2026-01-09 11:21:25.571194926 +0000 UTC m=+2131.021099727" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 
11:22:07.342798 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.346093 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.354848 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.467417 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.467532 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.467788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.570542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod 
\"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.570629 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.570688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.571248 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.571411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod \"certified-operators-l6sq5\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.594717 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"certified-operators-l6sq5\" (UID: 
\"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:07 crc kubenswrapper[4727]: I0109 11:22:07.672288 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:08 crc kubenswrapper[4727]: I0109 11:22:08.422203 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:08 crc kubenswrapper[4727]: E0109 11:22:08.854926 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b88c83_24e9_4f37_9671_0dc9d8c1abf1.slice/crio-3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b88c83_24e9_4f37_9671_0dc9d8c1abf1.slice/crio-conmon-3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:22:09 crc kubenswrapper[4727]: I0109 11:22:09.002329 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f" exitCode=0 Jan 09 11:22:09 crc kubenswrapper[4727]: I0109 11:22:09.002443 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f"} Jan 09 11:22:09 crc kubenswrapper[4727]: I0109 11:22:09.002892 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" 
event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerStarted","Data":"0046b42f8e401447fa0ab1dc80943ca94acdf238cb3e97f8bdcadcde73dae8cd"} Jan 09 11:22:10 crc kubenswrapper[4727]: I0109 11:22:10.017857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerStarted","Data":"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d"} Jan 09 11:22:11 crc kubenswrapper[4727]: I0109 11:22:11.032210 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" exitCode=0 Jan 09 11:22:11 crc kubenswrapper[4727]: I0109 11:22:11.032312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d"} Jan 09 11:22:12 crc kubenswrapper[4727]: I0109 11:22:12.045981 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerStarted","Data":"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd"} Jan 09 11:22:12 crc kubenswrapper[4727]: I0109 11:22:12.070844 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l6sq5" podStartSLOduration=2.561301109 podStartE2EDuration="5.070819451s" podCreationTimestamp="2026-01-09 11:22:07 +0000 UTC" firstStartedPulling="2026-01-09 11:22:09.005632526 +0000 UTC m=+2174.455537307" lastFinishedPulling="2026-01-09 11:22:11.515150868 +0000 UTC m=+2176.965055649" observedRunningTime="2026-01-09 11:22:12.066950471 +0000 UTC m=+2177.516855252" watchObservedRunningTime="2026-01-09 11:22:12.070819451 +0000 UTC 
m=+2177.520724232" Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.112311 4727 generic.go:334] "Generic (PLEG): container finished" podID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerID="917763738c78c07dc56b747fa98f3e04970c051a5fb817aef84285f08efb7048" exitCode=0 Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.112450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerDied","Data":"917763738c78c07dc56b747fa98f3e04970c051a5fb817aef84285f08efb7048"} Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.673114 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.673183 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:17 crc kubenswrapper[4727]: I0109 11:22:17.750873 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.188979 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.638557 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.735860 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736445 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736583 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.736700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc 
kubenswrapper[4727]: I0109 11:22:18.736763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\" (UID: \"92bbfcf1-befd-42df-a532-97f9a3bd22d0\") " Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.743187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.744320 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p" (OuterVolumeSpecName: "kube-api-access-7w49p") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "kube-api-access-7w49p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.769719 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.770299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.772806 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.773298 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory" (OuterVolumeSpecName: "inventory") pod "92bbfcf1-befd-42df-a532-97f9a3bd22d0" (UID: "92bbfcf1-befd-42df-a532-97f9a3bd22d0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840203 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840251 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840267 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840282 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840298 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w49p\" (UniqueName: \"kubernetes.io/projected/92bbfcf1-befd-42df-a532-97f9a3bd22d0-kube-api-access-7w49p\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:18 crc kubenswrapper[4727]: I0109 11:22:18.840311 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bbfcf1-befd-42df-a532-97f9a3bd22d0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.138033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" event={"ID":"92bbfcf1-befd-42df-a532-97f9a3bd22d0","Type":"ContainerDied","Data":"afa19bd0290bcc947a157dfed1f40ca5489236d6b5f1ccbce8ce6fcc5af45edf"} Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.138071 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.138105 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa19bd0290bcc947a157dfed1f40ca5489236d6b5f1ccbce8ce6fcc5af45edf" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.268540 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v"] Jan 09 11:22:19 crc kubenswrapper[4727]: E0109 11:22:19.269219 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.269241 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.269609 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bbfcf1-befd-42df-a532-97f9a3bd22d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.270532 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276107 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276370 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276367 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276577 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.276675 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.282643 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v"] Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: 
\"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.349561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452534 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 
11:22:19.452677 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452754 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.452816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.459138 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: 
\"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.459412 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.460200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.463032 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.476919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zs24v\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.603966 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:22:19 crc kubenswrapper[4727]: I0109 11:22:19.914034 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.146300 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l6sq5" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server" containerID="cri-o://324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" gracePeriod=2 Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.185181 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v"] Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.524107 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.680045 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") pod \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.680110 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") pod \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.680298 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") pod 
\"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\" (UID: \"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1\") " Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.681538 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities" (OuterVolumeSpecName: "utilities") pod "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" (UID: "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.689008 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl" (OuterVolumeSpecName: "kube-api-access-kcqrl") pod "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" (UID: "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1"). InnerVolumeSpecName "kube-api-access-kcqrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.736874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" (UID: "b1b88c83-24e9-4f37-9671-0dc9d8c1abf1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.783213 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.784158 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:20 crc kubenswrapper[4727]: I0109 11:22:20.784186 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1-kube-api-access-kcqrl\") on node \"crc\" DevicePath \"\"" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179670 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" exitCode=0 Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd"} Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179793 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l6sq5" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l6sq5" event={"ID":"b1b88c83-24e9-4f37-9671-0dc9d8c1abf1","Type":"ContainerDied","Data":"0046b42f8e401447fa0ab1dc80943ca94acdf238cb3e97f8bdcadcde73dae8cd"} Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.179841 4727 scope.go:117] "RemoveContainer" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.181942 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerStarted","Data":"6f49ac9c9911a0566289b8031b75d8ac26fc7bc544ef7b7da479b4fe3906f46a"} Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.223814 4727 scope.go:117] "RemoveContainer" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.232546 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.241606 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l6sq5"] Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.245303 4727 scope.go:117] "RemoveContainer" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.269644 4727 scope.go:117] "RemoveContainer" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" Jan 09 11:22:21 crc kubenswrapper[4727]: E0109 11:22:21.270217 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd\": container with ID starting with 324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd not found: ID does not exist" containerID="324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.270263 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd"} err="failed to get container status \"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd\": rpc error: code = NotFound desc = could not find container \"324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd\": container with ID starting with 324f6fc2914ee71852a4da83eaffe85b42a849069484fdc8c2772ee589aa29dd not found: ID does not exist" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.270297 4727 scope.go:117] "RemoveContainer" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" Jan 09 11:22:21 crc kubenswrapper[4727]: E0109 11:22:21.270942 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d\": container with ID starting with a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d not found: ID does not exist" containerID="a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d" Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.271112 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d"} err="failed to get container status \"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d\": rpc error: code = NotFound desc = could not find container \"a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d\": container with ID 
starting with a195c4719d04593f4d425d00fdd8614f41a465e26ccd10469bf711468646505d not found: ID does not exist"
Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.271225 4727 scope.go:117] "RemoveContainer" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f"
Jan 09 11:22:21 crc kubenswrapper[4727]: E0109 11:22:21.272018 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f\": container with ID starting with 3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f not found: ID does not exist" containerID="3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f"
Jan 09 11:22:21 crc kubenswrapper[4727]: I0109 11:22:21.272085 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f"} err="failed to get container status \"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f\": rpc error: code = NotFound desc = could not find container \"3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f\": container with ID starting with 3468701bcdb8fe82995aac7f47b02797d355e3683b52786fa3dc779df728249f not found: ID does not exist"
Jan 09 11:22:22 crc kubenswrapper[4727]: I0109 11:22:22.194771 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerStarted","Data":"c8c2f367edb0664189b6ee0a5ac5f8874637772a39b40812888801e33cc22027"}
Jan 09 11:22:22 crc kubenswrapper[4727]: I0109 11:22:22.224083 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" podStartSLOduration=2.394051873 podStartE2EDuration="3.224058282s" podCreationTimestamp="2026-01-09 11:22:19 +0000 UTC" firstStartedPulling="2026-01-09 11:22:20.183653701 +0000 UTC m=+2185.633558482" lastFinishedPulling="2026-01-09 11:22:21.0136601 +0000 UTC m=+2186.463564891" observedRunningTime="2026-01-09 11:22:22.212232516 +0000 UTC m=+2187.662137297" watchObservedRunningTime="2026-01-09 11:22:22.224058282 +0000 UTC m=+2187.673963063"
Jan 09 11:22:22 crc kubenswrapper[4727]: I0109 11:22:22.878997 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" path="/var/lib/kubelet/pods/b1b88c83-24e9-4f37-9671-0dc9d8c1abf1/volumes"
Jan 09 11:22:39 crc kubenswrapper[4727]: I0109 11:22:39.405342 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:22:39 crc kubenswrapper[4727]: I0109 11:22:39.406421 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.576715 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ssm76"]
Jan 09 11:22:55 crc kubenswrapper[4727]: E0109 11:22:55.578252 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578274 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server"
Jan 09 11:22:55 crc kubenswrapper[4727]: E0109 11:22:55.578292 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-utilities"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578301 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-utilities"
Jan 09 11:22:55 crc kubenswrapper[4727]: E0109 11:22:55.578350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-content"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578358 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="extract-content"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.578648 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b88c83-24e9-4f37-9671-0dc9d8c1abf1" containerName="registry-server"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.580371 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.586728 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ssm76"]
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.695834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.695911 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.696001 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.798730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.798938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.799075 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.799439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.799631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.827943 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"community-operators-ssm76\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") " pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:55 crc kubenswrapper[4727]: I0109 11:22:55.950352 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:22:56 crc kubenswrapper[4727]: I0109 11:22:56.577071 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ssm76"]
Jan 09 11:22:57 crc kubenswrapper[4727]: I0109 11:22:57.566888 4727 generic.go:334] "Generic (PLEG): container finished" podID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3" exitCode=0
Jan 09 11:22:57 crc kubenswrapper[4727]: I0109 11:22:57.566944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"}
Jan 09 11:22:57 crc kubenswrapper[4727]: I0109 11:22:57.567381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerStarted","Data":"727a60c621ddad51cb136733c31b3847261b5d7b94b13ab66e3ea2faa30e3d2b"}
Jan 09 11:22:59 crc kubenswrapper[4727]: I0109 11:22:59.591379 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerStarted","Data":"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"}
Jan 09 11:23:00 crc kubenswrapper[4727]: I0109 11:23:00.607177 4727 generic.go:334] "Generic (PLEG): container finished" podID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe" exitCode=0
Jan 09 11:23:00 crc kubenswrapper[4727]: I0109 11:23:00.607345 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"}
Jan 09 11:23:01 crc kubenswrapper[4727]: I0109 11:23:01.620478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerStarted","Data":"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"}
Jan 09 11:23:01 crc kubenswrapper[4727]: I0109 11:23:01.646058 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ssm76" podStartSLOduration=3.071977875 podStartE2EDuration="6.646028651s" podCreationTimestamp="2026-01-09 11:22:55 +0000 UTC" firstStartedPulling="2026-01-09 11:22:57.569038668 +0000 UTC m=+2223.018943449" lastFinishedPulling="2026-01-09 11:23:01.143089444 +0000 UTC m=+2226.592994225" observedRunningTime="2026-01-09 11:23:01.64371864 +0000 UTC m=+2227.093623421" watchObservedRunningTime="2026-01-09 11:23:01.646028651 +0000 UTC m=+2227.095933432"
Jan 09 11:23:05 crc kubenswrapper[4727]: I0109 11:23:05.951166 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:23:05 crc kubenswrapper[4727]: I0109 11:23:05.954271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:23:06 crc kubenswrapper[4727]: I0109 11:23:06.013874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:23:06 crc kubenswrapper[4727]: I0109 11:23:06.730051 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:23:06 crc kubenswrapper[4727]: I0109 11:23:06.782996 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ssm76"]
Jan 09 11:23:08 crc kubenswrapper[4727]: I0109 11:23:08.696828 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ssm76" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" containerID="cri-o://cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" gracePeriod=2
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.149227 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.329952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") pod \"a547e222-4018-4b48-b858-e6dd84f85cb1\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") "
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.330268 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") pod \"a547e222-4018-4b48-b858-e6dd84f85cb1\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") "
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.330325 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") pod \"a547e222-4018-4b48-b858-e6dd84f85cb1\" (UID: \"a547e222-4018-4b48-b858-e6dd84f85cb1\") "
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.331641 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities" (OuterVolumeSpecName: "utilities") pod "a547e222-4018-4b48-b858-e6dd84f85cb1" (UID: "a547e222-4018-4b48-b858-e6dd84f85cb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.338844 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr" (OuterVolumeSpecName: "kube-api-access-jhbrr") pod "a547e222-4018-4b48-b858-e6dd84f85cb1" (UID: "a547e222-4018-4b48-b858-e6dd84f85cb1"). InnerVolumeSpecName "kube-api-access-jhbrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.385386 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a547e222-4018-4b48-b858-e6dd84f85cb1" (UID: "a547e222-4018-4b48-b858-e6dd84f85cb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.405935 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.406020 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.433099 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbrr\" (UniqueName: \"kubernetes.io/projected/a547e222-4018-4b48-b858-e6dd84f85cb1-kube-api-access-jhbrr\") on node \"crc\" DevicePath \"\""
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.433135 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.433145 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a547e222-4018-4b48-b858-e6dd84f85cb1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711363 4727 generic.go:334] "Generic (PLEG): container finished" podID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad" exitCode=0
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"}
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711531 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssm76"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssm76" event={"ID":"a547e222-4018-4b48-b858-e6dd84f85cb1","Type":"ContainerDied","Data":"727a60c621ddad51cb136733c31b3847261b5d7b94b13ab66e3ea2faa30e3d2b"}
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.711589 4727 scope.go:117] "RemoveContainer" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.751315 4727 scope.go:117] "RemoveContainer" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.758702 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ssm76"]
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.768840 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ssm76"]
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.776797 4727 scope.go:117] "RemoveContainer" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.820114 4727 scope.go:117] "RemoveContainer" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"
Jan 09 11:23:09 crc kubenswrapper[4727]: E0109 11:23:09.821415 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad\": container with ID starting with cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad not found: ID does not exist" containerID="cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.821482 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad"} err="failed to get container status \"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad\": rpc error: code = NotFound desc = could not find container \"cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad\": container with ID starting with cec504367c4849d37d7175bd9f5e24476cd9395ae64278aca0b195d82a40d2ad not found: ID does not exist"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.821585 4727 scope.go:117] "RemoveContainer" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"
Jan 09 11:23:09 crc kubenswrapper[4727]: E0109 11:23:09.822136 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe\": container with ID starting with 72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe not found: ID does not exist" containerID="72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.822186 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe"} err="failed to get container status \"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe\": rpc error: code = NotFound desc = could not find container \"72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe\": container with ID starting with 72b3759f9966cd845e6d156d008bb6e8d67af429db784b8eee8d8ad02a9dc0fe not found: ID does not exist"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.822210 4727 scope.go:117] "RemoveContainer" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"
Jan 09 11:23:09 crc kubenswrapper[4727]: E0109 11:23:09.822755 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3\": container with ID starting with 2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3 not found: ID does not exist" containerID="2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"
Jan 09 11:23:09 crc kubenswrapper[4727]: I0109 11:23:09.822788 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3"} err="failed to get container status \"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3\": rpc error: code = NotFound desc = could not find container \"2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3\": container with ID starting with 2b0d8785543e50b695485acd7384f71f8c85de7aa289e87f9c5a74661d3c9be3 not found: ID does not exist"
Jan 09 11:23:10 crc kubenswrapper[4727]: I0109 11:23:10.874810 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" path="/var/lib/kubelet/pods/a547e222-4018-4b48-b858-e6dd84f85cb1/volumes"
Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.404617 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.405403 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.405465 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.406345 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 11:23:39 crc kubenswrapper[4727]: I0109 11:23:39.406407 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" gracePeriod=600
Jan 09 11:23:39 crc kubenswrapper[4727]: E0109 11:23:39.539381 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.113874 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" exitCode=0
Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.113983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"}
Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.114418 4727 scope.go:117] "RemoveContainer" containerID="c16e44070da2aff8cc30eed95ab5b54ecbda650a4a9081340001aecf62124ccd"
Jan 09 11:23:40 crc kubenswrapper[4727]: I0109 11:23:40.116112 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:23:40 crc kubenswrapper[4727]: E0109 11:23:40.116495 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:23:52 crc kubenswrapper[4727]: I0109 11:23:52.861029 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:23:52 crc kubenswrapper[4727]: E0109 11:23:52.863004 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:24:06 crc kubenswrapper[4727]: I0109 11:24:06.860682 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:24:06 crc kubenswrapper[4727]: E0109 11:24:06.863730 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:24:20 crc kubenswrapper[4727]: I0109 11:24:20.868405 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:24:20 crc kubenswrapper[4727]: E0109 11:24:20.875193 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:24:35 crc kubenswrapper[4727]: I0109 11:24:35.860591 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:24:35 crc kubenswrapper[4727]: E0109 11:24:35.861374 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:24:48 crc kubenswrapper[4727]: I0109 11:24:48.860377 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:24:48 crc kubenswrapper[4727]: E0109 11:24:48.861194 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:24:59 crc kubenswrapper[4727]: I0109 11:24:59.860834 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:24:59 crc kubenswrapper[4727]: E0109 11:24:59.862019 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:25:12 crc kubenswrapper[4727]: I0109 11:25:12.861571 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:25:12 crc kubenswrapper[4727]: E0109 11:25:12.862767 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:25:27 crc kubenswrapper[4727]: I0109 11:25:27.861211 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:25:27 crc kubenswrapper[4727]: E0109 11:25:27.862230 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:25:40 crc kubenswrapper[4727]: I0109 11:25:40.860774 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:25:40 crc kubenswrapper[4727]: E0109 11:25:40.862809 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:25:54 crc kubenswrapper[4727]: I0109 11:25:54.860740 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:25:54 crc kubenswrapper[4727]: E0109 11:25:54.862057 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:26:07 crc kubenswrapper[4727]: I0109 11:26:07.861749 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:26:07 crc kubenswrapper[4727]: E0109 11:26:07.862708 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:26:18 crc kubenswrapper[4727]: I0109 11:26:18.867597 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:26:18 crc kubenswrapper[4727]: E0109 11:26:18.870113 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:26:29 crc kubenswrapper[4727]: I0109 11:26:29.861240 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:26:29 crc kubenswrapper[4727]: E0109 11:26:29.862246 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:26:40 crc kubenswrapper[4727]: I0109 11:26:40.861380 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:26:40 crc kubenswrapper[4727]: E0109 11:26:40.862366 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:26:50 crc kubenswrapper[4727]: I0109 11:26:50.174837 4727 generic.go:334] "Generic (PLEG): container finished" podID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerID="c8c2f367edb0664189b6ee0a5ac5f8874637772a39b40812888801e33cc22027" exitCode=0
Jan 09 11:26:50 crc kubenswrapper[4727]: I0109 11:26:50.174919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerDied","Data":"c8c2f367edb0664189b6ee0a5ac5f8874637772a39b40812888801e33cc22027"}
Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.691131 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.894861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895559 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.895746 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") pod \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\" (UID: \"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5\") " Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.905838 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql" (OuterVolumeSpecName: "kube-api-access-fb9ql") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "kube-api-access-fb9ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.907272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.932392 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.938648 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory" (OuterVolumeSpecName: "inventory") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.938967 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" (UID: "a56270d2-f80b-4dda-a64c-fe39d4b4a9e5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998200 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998243 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998253 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb9ql\" (UniqueName: \"kubernetes.io/projected/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-kube-api-access-fb9ql\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998264 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:51 crc kubenswrapper[4727]: I0109 11:26:51.998274 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56270d2-f80b-4dda-a64c-fe39d4b4a9e5-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.202769 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" event={"ID":"a56270d2-f80b-4dda-a64c-fe39d4b4a9e5","Type":"ContainerDied","Data":"6f49ac9c9911a0566289b8031b75d8ac26fc7bc544ef7b7da479b4fe3906f46a"} Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.203171 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f49ac9c9911a0566289b8031b75d8ac26fc7bc544ef7b7da479b4fe3906f46a" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.202849 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zs24v" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.335563 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"] Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336648 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336671 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336721 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336729 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336782 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-content" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336796 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-content" Jan 09 11:26:52 crc kubenswrapper[4727]: E0109 11:26:52.336828 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-utilities" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.336840 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="extract-utilities" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.337156 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a547e222-4018-4b48-b858-e6dd84f85cb1" containerName="registry-server" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.337181 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56270d2-f80b-4dda-a64c-fe39d4b4a9e5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.338344 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.345918 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.346406 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.346553 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.346711 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.347299 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.347450 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.347645 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.353572 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"] Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406576 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 
11:26:52.406657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406677 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406816 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.406879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: 
\"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509222 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509336 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509598 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.509638 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.512293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.515754 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.515969 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.516410 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.516639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.516944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.517460 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.517490 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.529733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s9spc\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:52 crc kubenswrapper[4727]: I0109 11:26:52.667547 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:26:53 crc kubenswrapper[4727]: I0109 11:26:53.268585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"] Jan 09 11:26:53 crc kubenswrapper[4727]: I0109 11:26:53.270489 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:26:54 crc kubenswrapper[4727]: I0109 11:26:54.224662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerStarted","Data":"e96c38b34971938a13a5d95cc7e9e5bb9f0334f54e93107a458540de51932122"} Jan 09 11:26:54 crc kubenswrapper[4727]: I0109 11:26:54.877162 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:26:54 crc kubenswrapper[4727]: E0109 11:26:54.878390 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:26:55 crc kubenswrapper[4727]: I0109 11:26:55.235863 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerStarted","Data":"b97c7281572885beb0fb4a270a332ed5b2e1e4e28d4b6930d596c07bdbbb787b"} Jan 09 11:26:55 crc kubenswrapper[4727]: I0109 11:26:55.264377 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" 
podStartSLOduration=2.470818497 podStartE2EDuration="3.264353663s" podCreationTimestamp="2026-01-09 11:26:52 +0000 UTC" firstStartedPulling="2026-01-09 11:26:53.269456649 +0000 UTC m=+2458.719361430" lastFinishedPulling="2026-01-09 11:26:54.062991825 +0000 UTC m=+2459.512896596" observedRunningTime="2026-01-09 11:26:55.264039115 +0000 UTC m=+2460.713943946" watchObservedRunningTime="2026-01-09 11:26:55.264353663 +0000 UTC m=+2460.714258454" Jan 09 11:27:08 crc kubenswrapper[4727]: I0109 11:27:08.861162 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:08 crc kubenswrapper[4727]: E0109 11:27:08.862596 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:19 crc kubenswrapper[4727]: I0109 11:27:19.861656 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:19 crc kubenswrapper[4727]: E0109 11:27:19.863590 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:31 crc kubenswrapper[4727]: I0109 11:27:31.861245 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:31 crc 
kubenswrapper[4727]: E0109 11:27:31.862281 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:43 crc kubenswrapper[4727]: I0109 11:27:43.861160 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:43 crc kubenswrapper[4727]: E0109 11:27:43.862251 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:27:58 crc kubenswrapper[4727]: I0109 11:27:58.860896 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:27:58 crc kubenswrapper[4727]: E0109 11:27:58.861881 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:09 crc kubenswrapper[4727]: I0109 11:28:09.861031 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 
09 11:28:09 crc kubenswrapper[4727]: E0109 11:28:09.863165 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:22 crc kubenswrapper[4727]: I0109 11:28:22.860951 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:28:22 crc kubenswrapper[4727]: E0109 11:28:22.863436 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:37 crc kubenswrapper[4727]: I0109 11:28:37.861772 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82" Jan 09 11:28:37 crc kubenswrapper[4727]: E0109 11:28:37.862991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:28:41 crc kubenswrapper[4727]: I0109 11:28:41.938229 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"] 
Jan 09 11:28:41 crc kubenswrapper[4727]: I0109 11:28:41.941959 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:41 crc kubenswrapper[4727]: I0109 11:28:41.957009 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"]
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.079722 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.079800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.079828 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.181777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.181855 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.181878 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.182634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.182642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.218399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"redhat-operators-hgk2v\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") " pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.277255 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:42 crc kubenswrapper[4727]: I0109 11:28:42.895428 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"]
Jan 09 11:28:43 crc kubenswrapper[4727]: I0109 11:28:43.358129 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerStarted","Data":"2dbfd7c16220db57790802d1d9d60761a735bdf5de8eb47bf70b9c8a4a1de75b"}
Jan 09 11:28:44 crc kubenswrapper[4727]: I0109 11:28:44.367708 4727 generic.go:334] "Generic (PLEG): container finished" podID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b" exitCode=0
Jan 09 11:28:44 crc kubenswrapper[4727]: I0109 11:28:44.367801 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"}
Jan 09 11:28:46 crc kubenswrapper[4727]: I0109 11:28:46.393175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerStarted","Data":"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"}
Jan 09 11:28:47 crc kubenswrapper[4727]: I0109 11:28:47.404712 4727 generic.go:334] "Generic (PLEG): container finished" podID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60" exitCode=0
Jan 09 11:28:47 crc kubenswrapper[4727]: I0109 11:28:47.404787 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"}
Jan 09 11:28:49 crc kubenswrapper[4727]: I0109 11:28:49.429450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerStarted","Data":"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"}
Jan 09 11:28:49 crc kubenswrapper[4727]: I0109 11:28:49.462483 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hgk2v" podStartSLOduration=4.624964721 podStartE2EDuration="8.462453212s" podCreationTimestamp="2026-01-09 11:28:41 +0000 UTC" firstStartedPulling="2026-01-09 11:28:44.369783261 +0000 UTC m=+2569.819688042" lastFinishedPulling="2026-01-09 11:28:48.207271752 +0000 UTC m=+2573.657176533" observedRunningTime="2026-01-09 11:28:49.449242657 +0000 UTC m=+2574.899147448" watchObservedRunningTime="2026-01-09 11:28:49.462453212 +0000 UTC m=+2574.912357993"
Jan 09 11:28:49 crc kubenswrapper[4727]: I0109 11:28:49.861073 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:28:50 crc kubenswrapper[4727]: I0109 11:28:50.445711 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"}
Jan 09 11:28:52 crc kubenswrapper[4727]: I0109 11:28:52.278146 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:52 crc kubenswrapper[4727]: I0109 11:28:52.279287 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:28:53 crc kubenswrapper[4727]: I0109 11:28:53.329544 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hgk2v" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" probeResult="failure" output=<
Jan 09 11:28:53 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Jan 09 11:28:53 crc kubenswrapper[4727]: >
Jan 09 11:29:02 crc kubenswrapper[4727]: I0109 11:29:02.329946 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:29:02 crc kubenswrapper[4727]: I0109 11:29:02.386464 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:29:02 crc kubenswrapper[4727]: I0109 11:29:02.577940 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"]
Jan 09 11:29:03 crc kubenswrapper[4727]: I0109 11:29:03.580161 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hgk2v" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server" containerID="cri-o://811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" gracePeriod=2
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.195985 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.302213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") pod \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") "
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.302377 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") pod \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") "
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.302531 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") pod \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\" (UID: \"a561451a-0ba0-48cb-bf09-b9a12d49c7ef\") "
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.303456 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities" (OuterVolumeSpecName: "utilities") pod "a561451a-0ba0-48cb-bf09-b9a12d49c7ef" (UID: "a561451a-0ba0-48cb-bf09-b9a12d49c7ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.310848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6" (OuterVolumeSpecName: "kube-api-access-mbpb6") pod "a561451a-0ba0-48cb-bf09-b9a12d49c7ef" (UID: "a561451a-0ba0-48cb-bf09-b9a12d49c7ef"). InnerVolumeSpecName "kube-api-access-mbpb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.405792 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.406351 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbpb6\" (UniqueName: \"kubernetes.io/projected/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-kube-api-access-mbpb6\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.435043 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a561451a-0ba0-48cb-bf09-b9a12d49c7ef" (UID: "a561451a-0ba0-48cb-bf09-b9a12d49c7ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.508831 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a561451a-0ba0-48cb-bf09-b9a12d49c7ef-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595013 4727 generic.go:334] "Generic (PLEG): container finished" podID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38" exitCode=0
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"}
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595196 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgk2v"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgk2v" event={"ID":"a561451a-0ba0-48cb-bf09-b9a12d49c7ef","Type":"ContainerDied","Data":"2dbfd7c16220db57790802d1d9d60761a735bdf5de8eb47bf70b9c8a4a1de75b"}
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.595268 4727 scope.go:117] "RemoveContainer" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.623105 4727 scope.go:117] "RemoveContainer" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.655529 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"]
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.668121 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hgk2v"]
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.672446 4727 scope.go:117] "RemoveContainer" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.736838 4727 scope.go:117] "RemoveContainer" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"
Jan 09 11:29:04 crc kubenswrapper[4727]: E0109 11:29:04.737763 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38\": container with ID starting with 811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38 not found: ID does not exist" containerID="811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.737803 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38"} err="failed to get container status \"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38\": rpc error: code = NotFound desc = could not find container \"811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38\": container with ID starting with 811dad006a4ded70cf3ba8ed8c151bc44c0551169f329938cff762dbd1daac38 not found: ID does not exist"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.737843 4727 scope.go:117] "RemoveContainer" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"
Jan 09 11:29:04 crc kubenswrapper[4727]: E0109 11:29:04.747843 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60\": container with ID starting with 1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60 not found: ID does not exist" containerID="1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.747902 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60"} err="failed to get container status \"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60\": rpc error: code = NotFound desc = could not find container \"1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60\": container with ID starting with 1efe0fea8505dc69e3395897fa82db43250d0972cf1662fb62ba6c7be5c73a60 not found: ID does not exist"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.747942 4727 scope.go:117] "RemoveContainer" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"
Jan 09 11:29:04 crc kubenswrapper[4727]: E0109 11:29:04.748408 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b\": container with ID starting with bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b not found: ID does not exist" containerID="bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.748436 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b"} err="failed to get container status \"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b\": rpc error: code = NotFound desc = could not find container \"bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b\": container with ID starting with bf9df010ec43f27b80e93921ac61eacf57c71c695193388c724f9345fae4103b not found: ID does not exist"
Jan 09 11:29:04 crc kubenswrapper[4727]: I0109 11:29:04.874390 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" path="/var/lib/kubelet/pods/a561451a-0ba0-48cb-bf09-b9a12d49c7ef/volumes"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.257076 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"]
Jan 09 11:29:15 crc kubenswrapper[4727]: E0109 11:29:15.258372 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258391 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server"
Jan 09 11:29:15 crc kubenswrapper[4727]: E0109 11:29:15.258417 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-utilities"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258425 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-utilities"
Jan 09 11:29:15 crc kubenswrapper[4727]: E0109 11:29:15.258471 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-content"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258480 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="extract-content"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.258710 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a561451a-0ba0-48cb-bf09-b9a12d49c7ef" containerName="registry-server"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.263956 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.273170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"]
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.379487 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.379587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.379647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.481749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.481964 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.481990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.482569 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.482650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.508491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"redhat-marketplace-mj2kv\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:15 crc kubenswrapper[4727]: I0109 11:29:15.586646 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv"
Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.140224 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"]
Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.925127 4727 generic.go:334] "Generic (PLEG): container finished" podID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerID="d3d419655e0a8c088b2e588edae2dd1ed27724f48dd1d110bfe6363f8810c59b" exitCode=0
Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.925190 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"d3d419655e0a8c088b2e588edae2dd1ed27724f48dd1d110bfe6363f8810c59b"}
Jan 09 11:29:16 crc kubenswrapper[4727]: I0109 11:29:16.925228 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerStarted","Data":"5196fa7b96d91e2af73fde39e5560330346d1e1ff711007beb9c427b472ce53d"}
Jan 09 11:29:18 crc kubenswrapper[4727]: I0109 11:29:18.947549 4727 generic.go:334] "Generic (PLEG): container finished" podID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerID="99847b0f5d5ea4c9025ebfd6014ccb30de8cbd7f9e5fec2e95c6213ae7fa5f84" exitCode=0
Jan 09 11:29:18 crc kubenswrapper[4727]: I0109 11:29:18.947607 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"99847b0f5d5ea4c9025ebfd6014ccb30de8cbd7f9e5fec2e95c6213ae7fa5f84"}
Jan 09 11:29:19 crc kubenswrapper[4727]: I0109 11:29:19.969780 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerStarted","Data":"1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3"}
Jan 09 11:29:19 crc kubenswrapper[4727]: I0109 11:29:19.998252 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mj2kv" podStartSLOduration=2.339687435 podStartE2EDuration="4.998226166s" podCreationTimestamp="2026-01-09 11:29:15 +0000 UTC" firstStartedPulling="2026-01-09 11:29:16.927127065 +0000 UTC m=+2602.377031846" lastFinishedPulling="2026-01-09 11:29:19.585665796 +0000 UTC m=+2605.035570577" observedRunningTime="2026-01-09 11:29:19.988094929 +0000 UTC m=+2605.437999710" watchObservedRunningTime="2026-01-09 11:29:19.998226166 +0000 UTC m=+2605.448130947"
Jan 09 11:29:20 crc kubenswrapper[4727]: I0109 11:29:20.981997 4727 generic.go:334] "Generic (PLEG): container finished" podID="291b6783-3c71-4449-b696-27c7c340c41a" containerID="b97c7281572885beb0fb4a270a332ed5b2e1e4e28d4b6930d596c07bdbbb787b" exitCode=0
Jan 09 11:29:20 crc kubenswrapper[4727]: I0109 11:29:20.982054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerDied","Data":"b97c7281572885beb0fb4a270a332ed5b2e1e4e28d4b6930d596c07bdbbb787b"}
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.474872 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc"
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551000 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551091 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551258 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551285 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551384 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551573 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551670 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.551733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") pod \"291b6783-3c71-4449-b696-27c7c340c41a\" (UID: \"291b6783-3c71-4449-b696-27c7c340c41a\") "
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.559409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h" (OuterVolumeSpecName: "kube-api-access-sbt5h") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "kube-api-access-sbt5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.570941 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.584918 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.588692 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.594404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.597776 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory" (OuterVolumeSpecName: "inventory") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.598082 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.603065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.612139 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "291b6783-3c71-4449-b696-27c7c340c41a" (UID: "291b6783-3c71-4449-b696-27c7c340c41a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.654954 4727 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655000 4727 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655010 4727 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655018 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655026 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbt5h\" (UniqueName: \"kubernetes.io/projected/291b6783-3c71-4449-b696-27c7c340c41a-kube-api-access-sbt5h\") on node \"crc\" DevicePath \"\""
Jan 09 11:29:22 crc kubenswrapper[4727]: I0109
11:29:22.655034 4727 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655044 4727 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655052 4727 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/291b6783-3c71-4449-b696-27c7c340c41a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:22 crc kubenswrapper[4727]: I0109 11:29:22.655061 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291b6783-3c71-4449-b696-27c7c340c41a-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.016650 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" event={"ID":"291b6783-3c71-4449-b696-27c7c340c41a","Type":"ContainerDied","Data":"e96c38b34971938a13a5d95cc7e9e5bb9f0334f54e93107a458540de51932122"} Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.016729 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e96c38b34971938a13a5d95cc7e9e5bb9f0334f54e93107a458540de51932122" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.016824 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s9spc" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.133206 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"] Jan 09 11:29:23 crc kubenswrapper[4727]: E0109 11:29:23.133798 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291b6783-3c71-4449-b696-27c7c340c41a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.133825 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="291b6783-3c71-4449-b696-27c7c340c41a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.134035 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="291b6783-3c71-4449-b696-27c7c340c41a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.134854 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142032 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142070 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142282 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-h4dvw" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.142403 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.165425 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"] Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.268658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269282 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269336 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269374 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269436 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.269458 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.270221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.372831 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.372910 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.372953 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 
11:29:23.373011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.374009 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.374067 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.374093 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.377962 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.377993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.378385 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.378882 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.380767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.384446 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.402144 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:23 crc kubenswrapper[4727]: I0109 11:29:23.465069 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" Jan 09 11:29:24 crc kubenswrapper[4727]: I0109 11:29:24.013461 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"] Jan 09 11:29:24 crc kubenswrapper[4727]: I0109 11:29:24.032438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerStarted","Data":"5215f2c39a133eb2ca530e9330648f3c15663b75c3c08b1dcc95a75b53b789ae"} Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.043947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerStarted","Data":"d98d9a6875efb3d63e2cbb7a99d54696008a62492d141c221d77dc675ea3743f"} Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.082840 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" podStartSLOduration=1.6523808519999998 podStartE2EDuration="2.082813432s" podCreationTimestamp="2026-01-09 11:29:23 +0000 UTC" firstStartedPulling="2026-01-09 11:29:24.018437711 +0000 UTC m=+2609.468342492" lastFinishedPulling="2026-01-09 11:29:24.448870291 +0000 UTC m=+2609.898775072" observedRunningTime="2026-01-09 11:29:25.067338215 +0000 UTC m=+2610.517243026" watchObservedRunningTime="2026-01-09 11:29:25.082813432 +0000 UTC m=+2610.532718213" Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.586780 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.586916 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 
11:29:25 crc kubenswrapper[4727]: I0109 11:29:25.641689 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:26 crc kubenswrapper[4727]: I0109 11:29:26.102778 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:26 crc kubenswrapper[4727]: I0109 11:29:26.158254 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:28 crc kubenswrapper[4727]: I0109 11:29:28.072953 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mj2kv" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" containerID="cri-o://1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3" gracePeriod=2 Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.084226 4727 generic.go:334] "Generic (PLEG): container finished" podID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerID="1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3" exitCode=0 Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.084319 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3"} Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.202956 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.309794 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") pod \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.310067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") pod \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.310111 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") pod \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\" (UID: \"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b\") " Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.310903 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities" (OuterVolumeSpecName: "utilities") pod "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" (UID: "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.317295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk" (OuterVolumeSpecName: "kube-api-access-wqgzk") pod "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" (UID: "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b"). InnerVolumeSpecName "kube-api-access-wqgzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.334872 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" (UID: "0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.413385 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqgzk\" (UniqueName: \"kubernetes.io/projected/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-kube-api-access-wqgzk\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.413439 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:29 crc kubenswrapper[4727]: I0109 11:29:29.413454 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.099708 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj2kv" event={"ID":"0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b","Type":"ContainerDied","Data":"5196fa7b96d91e2af73fde39e5560330346d1e1ff711007beb9c427b472ce53d"} Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.099834 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj2kv" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.100142 4727 scope.go:117] "RemoveContainer" containerID="1cb457f4f400d49849c79ceb1e334b3bbd3651d9b8b66cde1b555ffc6ae076b3" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.131744 4727 scope.go:117] "RemoveContainer" containerID="99847b0f5d5ea4c9025ebfd6014ccb30de8cbd7f9e5fec2e95c6213ae7fa5f84" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.161776 4727 scope.go:117] "RemoveContainer" containerID="d3d419655e0a8c088b2e588edae2dd1ed27724f48dd1d110bfe6363f8810c59b" Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.172301 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.184030 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj2kv"] Jan 09 11:29:30 crc kubenswrapper[4727]: I0109 11:29:30.875336 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" path="/var/lib/kubelet/pods/0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b/volumes" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.158365 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"] Jan 09 11:30:00 crc kubenswrapper[4727]: E0109 11:30:00.159998 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160018 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" Jan 09 11:30:00 crc kubenswrapper[4727]: E0109 11:30:00.160058 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" 
containerName="extract-utilities" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160067 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="extract-utilities" Jan 09 11:30:00 crc kubenswrapper[4727]: E0109 11:30:00.160101 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="extract-content" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160109 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="extract-content" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.160361 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0902725f-4ad2-4ca7-a3cf-c3830cbb7c7b" containerName="registry-server" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.161495 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.168963 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.184127 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"] Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.200339 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.224776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"collect-profiles-29465970-s9lxd\" (UID: 
\"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.224917 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.225014 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.328045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.328283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.328332 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.329747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.342794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.347184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"collect-profiles-29465970-s9lxd\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:00 crc kubenswrapper[4727]: I0109 11:30:00.523082 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.015012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"]
Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.431684 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerStarted","Data":"2cca7262ad28ec1090ad80f914dc4e5864d23da2de8952de560f92f61e1d3514"}
Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.431745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerStarted","Data":"ee8a459daa813a8c8b9c2d87b9db0ce68c3b3c16df0a9e20c7b31ccb15637732"}
Jan 09 11:30:01 crc kubenswrapper[4727]: I0109 11:30:01.490773 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" podStartSLOduration=1.490746046 podStartE2EDuration="1.490746046s" podCreationTimestamp="2026-01-09 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:30:01.466939699 +0000 UTC m=+2646.916844490" watchObservedRunningTime="2026-01-09 11:30:01.490746046 +0000 UTC m=+2646.940650817"
Jan 09 11:30:02 crc kubenswrapper[4727]: I0109 11:30:02.442944 4727 generic.go:334] "Generic (PLEG): container finished" podID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerID="2cca7262ad28ec1090ad80f914dc4e5864d23da2de8952de560f92f61e1d3514" exitCode=0
Jan 09 11:30:02 crc kubenswrapper[4727]: I0109 11:30:02.443016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerDied","Data":"2cca7262ad28ec1090ad80f914dc4e5864d23da2de8952de560f92f61e1d3514"}
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.816964 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.905247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") pod \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") "
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.905467 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") pod \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") "
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.905520 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") pod \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\" (UID: \"d4a89b8e-3a44-4294-9077-c4496fb4c6dc\") "
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.906608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4a89b8e-3a44-4294-9077-c4496fb4c6dc" (UID: "d4a89b8e-3a44-4294-9077-c4496fb4c6dc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.914474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5" (OuterVolumeSpecName: "kube-api-access-8kqh5") pod "d4a89b8e-3a44-4294-9077-c4496fb4c6dc" (UID: "d4a89b8e-3a44-4294-9077-c4496fb4c6dc"). InnerVolumeSpecName "kube-api-access-8kqh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:30:03 crc kubenswrapper[4727]: I0109 11:30:03.914603 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4a89b8e-3a44-4294-9077-c4496fb4c6dc" (UID: "d4a89b8e-3a44-4294-9077-c4496fb4c6dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.009668 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-config-volume\") on node \"crc\" DevicePath \"\""
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.009865 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kqh5\" (UniqueName: \"kubernetes.io/projected/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-kube-api-access-8kqh5\") on node \"crc\" DevicePath \"\""
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.009878 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4a89b8e-3a44-4294-9077-c4496fb4c6dc-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.464575 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd" event={"ID":"d4a89b8e-3a44-4294-9077-c4496fb4c6dc","Type":"ContainerDied","Data":"ee8a459daa813a8c8b9c2d87b9db0ce68c3b3c16df0a9e20c7b31ccb15637732"}
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.464629 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8a459daa813a8c8b9c2d87b9db0ce68c3b3c16df0a9e20c7b31ccb15637732"
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.464694 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465970-s9lxd"
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.902523 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"]
Jan 09 11:30:04 crc kubenswrapper[4727]: I0109 11:30:04.910932 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465925-66zzw"]
Jan 09 11:30:06 crc kubenswrapper[4727]: I0109 11:30:06.871499 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd" path="/var/lib/kubelet/pods/a10bdc6b-0caf-48c6-a1f4-7b7b310d1afd/volumes"
Jan 09 11:30:39 crc kubenswrapper[4727]: I0109 11:30:39.897685 4727 scope.go:117] "RemoveContainer" containerID="f8891a6ceb5a8bd1111f85d1497013020d91fd3ea1005f453e8623903820a18d"
Jan 09 11:31:09 crc kubenswrapper[4727]: I0109 11:31:09.405418 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:31:09 crc kubenswrapper[4727]: I0109 11:31:09.406195 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:31:39 crc kubenswrapper[4727]: I0109 11:31:39.404933 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:31:39 crc kubenswrapper[4727]: I0109 11:31:39.406539 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:31:59 crc kubenswrapper[4727]: I0109 11:31:59.628753 4727 generic.go:334] "Generic (PLEG): container finished" podID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerID="d98d9a6875efb3d63e2cbb7a99d54696008a62492d141c221d77dc675ea3743f" exitCode=0
Jan 09 11:31:59 crc kubenswrapper[4727]: I0109 11:31:59.628825 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerDied","Data":"d98d9a6875efb3d63e2cbb7a99d54696008a62492d141c221d77dc675ea3743f"}
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.105077 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.275821 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.275898 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.275936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276026 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276108 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276280 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.276315 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") pod \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\" (UID: \"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc\") "
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.283273 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.285373 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2" (OuterVolumeSpecName: "kube-api-access-kprl2") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "kube-api-access-kprl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.307333 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.311535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.314697 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.315567 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory" (OuterVolumeSpecName: "inventory") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.316942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" (UID: "2d4033a7-e7a4-495b-bbb9-63e8ae1189bc"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378799 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378844 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378855 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378873 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kprl2\" (UniqueName: \"kubernetes.io/projected/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-kube-api-access-kprl2\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378889 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378902 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.378915 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2d4033a7-e7a4-495b-bbb9-63e8ae1189bc-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.652069 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5" event={"ID":"2d4033a7-e7a4-495b-bbb9-63e8ae1189bc","Type":"ContainerDied","Data":"5215f2c39a133eb2ca530e9330648f3c15663b75c3c08b1dcc95a75b53b789ae"}
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.652541 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5215f2c39a133eb2ca530e9330648f3c15663b75c3c08b1dcc95a75b53b789ae"
Jan 09 11:32:01 crc kubenswrapper[4727]: I0109 11:32:01.652145 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5"
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.404935 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.405909 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.406012 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.407099 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.407172 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884" gracePeriod=600
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.735262 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884" exitCode=0
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.735324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"}
Jan 09 11:32:09 crc kubenswrapper[4727]: I0109 11:32:09.735377 4727 scope.go:117] "RemoveContainer" containerID="11eaf6eaf3d1af8ea7f24d7f0dd81c09450154bd0c6843b327cfeebdfe9e9b82"
Jan 09 11:32:10 crc kubenswrapper[4727]: I0109 11:32:10.746192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"}
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.849663 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xm82r"]
Jan 09 11:32:38 crc kubenswrapper[4727]: E0109 11:32:38.851285 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851309 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 09 11:32:38 crc kubenswrapper[4727]: E0109 11:32:38.851347 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerName="collect-profiles"
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851356 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerName="collect-profiles"
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851676 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a89b8e-3a44-4294-9077-c4496fb4c6dc" containerName="collect-profiles"
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.851696 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4033a7-e7a4-495b-bbb9-63e8ae1189bc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.853564 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:38 crc kubenswrapper[4727]: I0109 11:32:38.877081 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"]
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.019601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.019669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.020316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.123302 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.123543 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.123628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.124567 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.124777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.158051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"certified-operators-xm82r\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") " pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.197605 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:39 crc kubenswrapper[4727]: I0109 11:32:39.716343 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"]
Jan 09 11:32:40 crc kubenswrapper[4727]: I0109 11:32:40.039456 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerStarted","Data":"65d89f292522ae39d30d7ca95637f43d6e1816896a3fff2a9aecdbd4feee13c8"}
Jan 09 11:32:41 crc kubenswrapper[4727]: I0109 11:32:41.051427 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerID="f445bb472615ef4dbb7efed1a020f1c3dfd8628284bfc5ece0df880d77dad63e" exitCode=0
Jan 09 11:32:41 crc kubenswrapper[4727]: I0109 11:32:41.051539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"f445bb472615ef4dbb7efed1a020f1c3dfd8628284bfc5ece0df880d77dad63e"}
Jan 09 11:32:41 crc kubenswrapper[4727]: I0109 11:32:41.054205 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 11:32:42 crc kubenswrapper[4727]: I0109 11:32:42.064675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerStarted","Data":"facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e"}
Jan 09 11:32:43 crc kubenswrapper[4727]: I0109 11:32:43.077392 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerID="facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e" exitCode=0
Jan 09 11:32:43 crc kubenswrapper[4727]: I0109 11:32:43.077460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e"}
Jan 09 11:32:44 crc kubenswrapper[4727]: I0109 11:32:44.088869 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerStarted","Data":"5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0"}
Jan 09 11:32:44 crc kubenswrapper[4727]: I0109 11:32:44.117732 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xm82r" podStartSLOduration=3.459190481 podStartE2EDuration="6.117711812s" podCreationTimestamp="2026-01-09 11:32:38 +0000 UTC" firstStartedPulling="2026-01-09 11:32:41.053900843 +0000 UTC m=+2806.503805624" lastFinishedPulling="2026-01-09 11:32:43.712422174 +0000 UTC m=+2809.162326955" observedRunningTime="2026-01-09 11:32:44.114104437 +0000 UTC m=+2809.564009218" watchObservedRunningTime="2026-01-09 11:32:44.117711812 +0000 UTC m=+2809.567616593"
Jan 09 11:32:49 crc kubenswrapper[4727]: I0109 11:32:49.198710 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:49 crc kubenswrapper[4727]: I0109 11:32:49.199313 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:49 crc kubenswrapper[4727]: I0109 11:32:49.268585 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:50 crc kubenswrapper[4727]: I0109 11:32:50.240442 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:50 crc kubenswrapper[4727]: I0109 11:32:50.300534 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"]
Jan 09 11:32:52 crc kubenswrapper[4727]: I0109 11:32:52.202237 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xm82r" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="registry-server" containerID="cri-o://5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0" gracePeriod=2
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216160 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerID="5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0" exitCode=0
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0"}
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm82r" event={"ID":"9e114ddd-6947-4f8d-9679-8c56d3c33bd9","Type":"ContainerDied","Data":"65d89f292522ae39d30d7ca95637f43d6e1816896a3fff2a9aecdbd4feee13c8"}
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.216685 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d89f292522ae39d30d7ca95637f43d6e1816896a3fff2a9aecdbd4feee13c8"
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.256001 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r"
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.463612 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") pod \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") "
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.464826 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") pod \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") "
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.464865 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") pod \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\" (UID: \"9e114ddd-6947-4f8d-9679-8c56d3c33bd9\") "
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.465755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities" (OuterVolumeSpecName: "utilities") pod "9e114ddd-6947-4f8d-9679-8c56d3c33bd9" (UID: "9e114ddd-6947-4f8d-9679-8c56d3c33bd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.470899 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p" (OuterVolumeSpecName: "kube-api-access-m7g9p") pod "9e114ddd-6947-4f8d-9679-8c56d3c33bd9" (UID: "9e114ddd-6947-4f8d-9679-8c56d3c33bd9"). InnerVolumeSpecName "kube-api-access-m7g9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.507980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e114ddd-6947-4f8d-9679-8c56d3c33bd9" (UID: "9e114ddd-6947-4f8d-9679-8c56d3c33bd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.566861 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.566903 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7g9p\" (UniqueName: \"kubernetes.io/projected/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-kube-api-access-m7g9p\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:53 crc kubenswrapper[4727]: I0109 11:32:53.566917 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e114ddd-6947-4f8d-9679-8c56d3c33bd9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.225965 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xm82r" Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.270464 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.281020 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xm82r"] Jan 09 11:32:54 crc kubenswrapper[4727]: I0109 11:32:54.895705 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" path="/var/lib/kubelet/pods/9e114ddd-6947-4f8d-9679-8c56d3c33bd9/volumes" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.735394 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 11:32:59 crc kubenswrapper[4727]: E0109 11:32:59.736736 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-content" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.736755 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-content" Jan 09 11:32:59 crc kubenswrapper[4727]: E0109 11:32:59.736783 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-utilities" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.736793 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="extract-utilities" Jan 09 11:32:59 crc kubenswrapper[4727]: E0109 11:32:59.736840 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="registry-server" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.736848 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" 
containerName="registry-server" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.737121 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e114ddd-6947-4f8d-9679-8c56d3c33bd9" containerName="registry-server" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.738091 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.756473 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.774480 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.774498 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.774870 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ghr4t" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.777953 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.908728 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.908780 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") 
pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.908952 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909093 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909338 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod 
\"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:32:59 crc kubenswrapper[4727]: I0109 11:32:59.909499 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.011728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.011847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.011934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 
crc kubenswrapper[4727]: I0109 11:33:00.012005 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012385 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.012878 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013155 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013903 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.013959 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.014492 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.023310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.023433 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.023617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.042702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") " pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.052096 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") 
" pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.104243 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 09 11:33:00 crc kubenswrapper[4727]: I0109 11:33:00.577869 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 09 11:33:01 crc kubenswrapper[4727]: I0109 11:33:01.306871 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerStarted","Data":"8349c448d8e6552d0e3152e0251e4b01ee6c1b1475591f37b47c5feb06d40267"} Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.265921 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.276084 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.291698 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.468450 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.468771 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " 
pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.468987 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.571360 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.571491 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.571659 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.572059 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " 
pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.572178 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.599439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"community-operators-pm9fv\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") " pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:10 crc kubenswrapper[4727]: I0109 11:33:10.620112 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv" Jan 09 11:33:11 crc kubenswrapper[4727]: I0109 11:33:11.210435 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"] Jan 09 11:33:11 crc kubenswrapper[4727]: I0109 11:33:11.446804 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerStarted","Data":"550eac94714d5bbaee91f4e9a318d037390474a281cd19f3ba024ffdf68b2b5a"} Jan 09 11:33:15 crc kubenswrapper[4727]: I0109 11:33:15.505779 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b" exitCode=0 Jan 09 11:33:15 crc kubenswrapper[4727]: I0109 11:33:15.505991 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" 
event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"} Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.495524 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.496219 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:ni
l,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqnbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.497367 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" Jan 09 11:33:35 crc kubenswrapper[4727]: E0109 11:33:35.732333 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" Jan 09 11:33:36 crc kubenswrapper[4727]: I0109 11:33:36.742026 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerStarted","Data":"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"} Jan 09 11:33:37 crc kubenswrapper[4727]: I0109 11:33:37.753305 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720" exitCode=0 Jan 09 11:33:37 crc kubenswrapper[4727]: I0109 11:33:37.753384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"} Jan 09 11:33:39 crc kubenswrapper[4727]: I0109 11:33:39.776886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerStarted","Data":"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"} Jan 09 11:33:39 crc kubenswrapper[4727]: I0109 11:33:39.799361 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pm9fv" podStartSLOduration=26.429445206 
podStartE2EDuration="29.799334195s" podCreationTimestamp="2026-01-09 11:33:10 +0000 UTC" firstStartedPulling="2026-01-09 11:33:35.357367348 +0000 UTC m=+2860.807272129" lastFinishedPulling="2026-01-09 11:33:38.727256337 +0000 UTC m=+2864.177161118" observedRunningTime="2026-01-09 11:33:39.796089937 +0000 UTC m=+2865.245994718" watchObservedRunningTime="2026-01-09 11:33:39.799334195 +0000 UTC m=+2865.249238976"
Jan 09 11:33:40 crc kubenswrapper[4727]: I0109 11:33:40.620891 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pm9fv"
Jan 09 11:33:40 crc kubenswrapper[4727]: I0109 11:33:40.620951 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pm9fv"
Jan 09 11:33:41 crc kubenswrapper[4727]: I0109 11:33:41.679467 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pm9fv" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" probeResult="failure" output=<
Jan 09 11:33:41 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Jan 09 11:33:41 crc kubenswrapper[4727]: >
Jan 09 11:33:50 crc kubenswrapper[4727]: I0109 11:33:50.698832 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pm9fv"
Jan 09 11:33:50 crc kubenswrapper[4727]: I0109 11:33:50.761809 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pm9fv"
Jan 09 11:33:50 crc kubenswrapper[4727]: I0109 11:33:50.946803 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"]
Jan 09 11:33:51 crc kubenswrapper[4727]: I0109 11:33:51.898620 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerStarted","Data":"6fd71c43d4d8330f713c6bebee4de8234126f4e73026f0f31d0a1aa516bc5ecc"}
Jan 09 11:33:51 crc kubenswrapper[4727]: I0109 11:33:51.898799 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pm9fv" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server" containerID="cri-o://ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" gracePeriod=2
Jan 09 11:33:51 crc kubenswrapper[4727]: I0109 11:33:51.938705 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.021014776 podStartE2EDuration="53.938675957s" podCreationTimestamp="2026-01-09 11:32:58 +0000 UTC" firstStartedPulling="2026-01-09 11:33:00.58936034 +0000 UTC m=+2826.039265121" lastFinishedPulling="2026-01-09 11:33:50.507021511 +0000 UTC m=+2875.956926302" observedRunningTime="2026-01-09 11:33:51.915618443 +0000 UTC m=+2877.365523254" watchObservedRunningTime="2026-01-09 11:33:51.938675957 +0000 UTC m=+2877.388580748"
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.403755 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv"
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.498096 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") pod \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") "
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.498392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") pod \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") "
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.498450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") pod \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\" (UID: \"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb\") "
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.504277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities" (OuterVolumeSpecName: "utilities") pod "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" (UID: "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.528120 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk" (OuterVolumeSpecName: "kube-api-access-6cctk") pod "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" (UID: "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb"). InnerVolumeSpecName "kube-api-access-6cctk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.563445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" (UID: "a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.602563 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cctk\" (UniqueName: \"kubernetes.io/projected/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-kube-api-access-6cctk\") on node \"crc\" DevicePath \"\""
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.602630 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.602650 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910829 4727 generic.go:334] "Generic (PLEG): container finished" podID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8" exitCode=0
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"}
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910915 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm9fv"
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm9fv" event={"ID":"a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb","Type":"ContainerDied","Data":"550eac94714d5bbaee91f4e9a318d037390474a281cd19f3ba024ffdf68b2b5a"}
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.910967 4727 scope.go:117] "RemoveContainer" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.946068 4727 scope.go:117] "RemoveContainer" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.949463 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"]
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.960839 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pm9fv"]
Jan 09 11:33:52 crc kubenswrapper[4727]: I0109 11:33:52.982666 4727 scope.go:117] "RemoveContainer" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"
Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.043144 4727 scope.go:117] "RemoveContainer" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"
Jan 09 11:33:53 crc kubenswrapper[4727]: E0109 11:33:53.043892 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8\": container with ID starting with ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8 not found: ID does not exist" containerID="ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"
Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.043960 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8"} err="failed to get container status \"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8\": rpc error: code = NotFound desc = could not find container \"ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8\": container with ID starting with ee89fc6c3ba198ff2159ae6b006863b44f3d7be201f61f5b2d896cd64271f4f8 not found: ID does not exist"
Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.043999 4727 scope.go:117] "RemoveContainer" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"
Jan 09 11:33:53 crc kubenswrapper[4727]: E0109 11:33:53.044405 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720\": container with ID starting with 4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720 not found: ID does not exist" containerID="4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"
Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.044437 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720"} err="failed to get container status \"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720\": rpc error: code = NotFound desc = could not find container \"4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720\": container with ID starting with 4a5a1ed6158ae6d139e836987da84ec9eabb300a739e5b52c27d6885c6c59720 not found: ID does not exist"
Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.044460 4727 scope.go:117] "RemoveContainer" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"
Jan 09 11:33:53 crc kubenswrapper[4727]: E0109 11:33:53.048864 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b\": container with ID starting with b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b not found: ID does not exist" containerID="b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"
Jan 09 11:33:53 crc kubenswrapper[4727]: I0109 11:33:53.048909 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b"} err="failed to get container status \"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b\": rpc error: code = NotFound desc = could not find container \"b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b\": container with ID starting with b5415704cb56686ca59f69a97879eb1ab63d3206ea6df4a9f80c62904151640b not found: ID does not exist"
Jan 09 11:33:54 crc kubenswrapper[4727]: I0109 11:33:54.872955 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" path="/var/lib/kubelet/pods/a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb/volumes"
Jan 09 11:34:39 crc kubenswrapper[4727]: I0109 11:34:39.405645 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:34:39 crc kubenswrapper[4727]: I0109 11:34:39.406484 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:35:09 crc kubenswrapper[4727]: I0109 11:35:09.405157 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:35:09 crc kubenswrapper[4727]: I0109 11:35:09.405803 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.405329 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.406304 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.406385 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7"
Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.407451 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 11:35:39 crc kubenswrapper[4727]: I0109 11:35:39.407528 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" gracePeriod=600
Jan 09 11:35:39 crc kubenswrapper[4727]: E0109 11:35:39.547953 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.093532 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" exitCode=0
Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.093656 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"}
Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.094879 4727 scope.go:117] "RemoveContainer" containerID="045cc9b4f0a2e105dce4a1319ce62f5bf23b5460f4edcc28b6d59be076caf884"
Jan 09 11:35:40 crc kubenswrapper[4727]: I0109 11:35:40.095005 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:35:40 crc kubenswrapper[4727]: E0109 11:35:40.095298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:35:53 crc kubenswrapper[4727]: I0109 11:35:53.861288 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:35:53 crc kubenswrapper[4727]: E0109 11:35:53.862682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:36:04 crc kubenswrapper[4727]: I0109 11:36:04.883014 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:36:04 crc kubenswrapper[4727]: E0109 11:36:04.884009 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:36:18 crc kubenswrapper[4727]: I0109 11:36:18.860690 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:36:18 crc kubenswrapper[4727]: E0109 11:36:18.861902 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:36:29 crc kubenswrapper[4727]: I0109 11:36:29.860955 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:36:29 crc kubenswrapper[4727]: E0109 11:36:29.864139 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:36:42 crc kubenswrapper[4727]: I0109 11:36:42.862150 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:36:42 crc kubenswrapper[4727]: E0109 11:36:42.863181 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:36:56 crc kubenswrapper[4727]: I0109 11:36:56.907596 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:36:56 crc kubenswrapper[4727]: E0109 11:36:56.908777 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:37:09 crc kubenswrapper[4727]: I0109 11:37:09.860898 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:37:09 crc kubenswrapper[4727]: E0109 11:37:09.862109 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:37:23 crc kubenswrapper[4727]: I0109 11:37:23.860489 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:37:23 crc kubenswrapper[4727]: E0109 11:37:23.863169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:37:38 crc kubenswrapper[4727]: I0109 11:37:38.861274 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:37:38 crc kubenswrapper[4727]: E0109 11:37:38.862142 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:37:49 crc kubenswrapper[4727]: I0109 11:37:49.860731 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:37:49 crc kubenswrapper[4727]: E0109 11:37:49.861685 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:38:03 crc kubenswrapper[4727]: I0109 11:38:03.860960 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:38:03 crc kubenswrapper[4727]: E0109 11:38:03.862477 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:38:15 crc kubenswrapper[4727]: I0109 11:38:15.860691 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:38:15 crc kubenswrapper[4727]: E0109 11:38:15.867753 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:38:26 crc kubenswrapper[4727]: I0109 11:38:26.861691 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:38:26 crc kubenswrapper[4727]: E0109 11:38:26.862976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:38:40 crc kubenswrapper[4727]: I0109 11:38:40.152104 4727 scope.go:117] "RemoveContainer" containerID="f445bb472615ef4dbb7efed1a020f1c3dfd8628284bfc5ece0df880d77dad63e"
Jan 09 11:38:40 crc kubenswrapper[4727]: I0109 11:38:40.860931 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:38:40 crc kubenswrapper[4727]: E0109 11:38:40.861612 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:38:52 crc kubenswrapper[4727]: I0109 11:38:52.861090 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:38:52 crc kubenswrapper[4727]: E0109 11:38:52.862036 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:39:04 crc kubenswrapper[4727]: I0109 11:39:04.869535 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:39:04 crc kubenswrapper[4727]: E0109 11:39:04.870712 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:39:18 crc kubenswrapper[4727]: I0109 11:39:18.861255 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:39:18 crc kubenswrapper[4727]: E0109 11:39:18.862470 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.814007 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"]
Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.815469 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-content"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815487 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-content"
Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.815551 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-utilities"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815561 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="extract-utilities"
Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.815576 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815583 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.815878 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fa09b1-b89c-4ff0-828c-d8ee3e0dbcfb" containerName="registry-server"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.817996 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.830046 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"]
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.860824 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:39:32 crc kubenswrapper[4727]: E0109 11:39:32.861149 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.953991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.954191 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:32 crc kubenswrapper[4727]: I0109 11:39:32.954235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.056567 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.056666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.056749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.057182 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.057309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.080395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"redhat-operators-q8jd5\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.141211 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:33 crc kubenswrapper[4727]: I0109 11:39:33.679136 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"]
Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.577200 4727 generic.go:334] "Generic (PLEG): container finished" podID="42513cc8-0316-49f1-8062-74a805d1e27b" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" exitCode=0
Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.577733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb"}
Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.577780 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerStarted","Data":"749709203c591e99d8095d66f17f59fc07f318ade2c0664182dbb247fc67d2b6"}
Jan 09 11:39:34 crc kubenswrapper[4727]: I0109 11:39:34.580881 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 11:39:37 crc kubenswrapper[4727]: I0109 11:39:37.616971 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerStarted","Data":"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1"}
Jan 09 11:39:40 crc kubenswrapper[4727]: I0109 11:39:40.218162 4727 scope.go:117] "RemoveContainer" containerID="5f64d88220f1b1d82b9918e2291ce525e4c04f813c3c15cef5b23695873610e0"
Jan 09 11:39:40 crc kubenswrapper[4727]: I0109 11:39:40.247896 4727 scope.go:117] "RemoveContainer" containerID="facf79eacfe3ba27d5328b6d90303d2afa354a543a058f6550ca075191ad3c5e"
Jan 09 11:39:41 crc kubenswrapper[4727]: I0109 11:39:41.658378 4727 generic.go:334] "Generic (PLEG): container finished" podID="42513cc8-0316-49f1-8062-74a805d1e27b" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" exitCode=0
Jan 09 11:39:41 crc kubenswrapper[4727]: I0109 11:39:41.658470 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1"}
Jan 09 11:39:44 crc kubenswrapper[4727]: I0109 11:39:44.702158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerStarted","Data":"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b"}
Jan 09 11:39:44 crc kubenswrapper[4727]: I0109 11:39:44.728440 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q8jd5" podStartSLOduration=3.735263495 podStartE2EDuration="12.728412519s" podCreationTimestamp="2026-01-09 11:39:32 +0000 UTC" firstStartedPulling="2026-01-09 11:39:34.580466881 +0000 UTC m=+3220.030371672" lastFinishedPulling="2026-01-09 11:39:43.573615915 +0000 UTC m=+3229.023520696" observedRunningTime="2026-01-09 11:39:44.721433539 +0000 UTC m=+3230.171338330" watchObservedRunningTime="2026-01-09 11:39:44.728412519 +0000 UTC m=+3230.178317310"
Jan 09 11:39:44 crc kubenswrapper[4727]: I0109 11:39:44.866938 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af"
Jan 09 11:39:44 crc kubenswrapper[4727]: E0109 11:39:44.867254 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.141559 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.143108 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.208845 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.845270 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q8jd5"
Jan 09 11:39:53 crc kubenswrapper[4727]: I0109 11:39:53.905412 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"]
Jan 09 11:39:55 crc kubenswrapper[4727]: I0109 11:39:55.820829 4727 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-marketplace/redhat-operators-q8jd5" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" containerID="cri-o://1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" gracePeriod=2 Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.366240 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.459618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") pod \"42513cc8-0316-49f1-8062-74a805d1e27b\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.459686 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") pod \"42513cc8-0316-49f1-8062-74a805d1e27b\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.460037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") pod \"42513cc8-0316-49f1-8062-74a805d1e27b\" (UID: \"42513cc8-0316-49f1-8062-74a805d1e27b\") " Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.460369 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities" (OuterVolumeSpecName: "utilities") pod "42513cc8-0316-49f1-8062-74a805d1e27b" (UID: "42513cc8-0316-49f1-8062-74a805d1e27b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.460778 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.469089 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s" (OuterVolumeSpecName: "kube-api-access-m699s") pod "42513cc8-0316-49f1-8062-74a805d1e27b" (UID: "42513cc8-0316-49f1-8062-74a805d1e27b"). InnerVolumeSpecName "kube-api-access-m699s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.562721 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m699s\" (UniqueName: \"kubernetes.io/projected/42513cc8-0316-49f1-8062-74a805d1e27b-kube-api-access-m699s\") on node \"crc\" DevicePath \"\"" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.597568 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42513cc8-0316-49f1-8062-74a805d1e27b" (UID: "42513cc8-0316-49f1-8062-74a805d1e27b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.666167 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42513cc8-0316-49f1-8062-74a805d1e27b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.835747 4727 generic.go:334] "Generic (PLEG): container finished" podID="42513cc8-0316-49f1-8062-74a805d1e27b" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" exitCode=0 Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.835870 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b"} Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.836307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8jd5" event={"ID":"42513cc8-0316-49f1-8062-74a805d1e27b","Type":"ContainerDied","Data":"749709203c591e99d8095d66f17f59fc07f318ade2c0664182dbb247fc67d2b6"} Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.835887 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8jd5" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.836344 4727 scope.go:117] "RemoveContainer" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.863012 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.863429 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.870807 4727 scope.go:117] "RemoveContainer" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.895619 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.896863 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q8jd5"] Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.923872 4727 scope.go:117] "RemoveContainer" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.954484 4727 scope.go:117] "RemoveContainer" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.955175 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b\": container with ID starting with 1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b not found: ID does not exist" containerID="1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.955229 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b"} err="failed to get container status \"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b\": rpc error: code = NotFound desc = could not find container \"1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b\": container with ID starting with 1925a3f5bdff1930c5dd6a0617c2814c42e1ee1c17a266efab38d25deb1cde6b not found: ID does not exist" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.955268 4727 scope.go:117] "RemoveContainer" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.956134 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1\": container with ID starting with d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1 not found: ID does not exist" containerID="d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.956173 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1"} err="failed to get container status \"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1\": rpc error: code = NotFound desc = could not find container \"d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1\": container with ID 
starting with d030438023a4fe9b3bc11f82fc6464f2e6d5058cb24edbcde8d8873b2496faa1 not found: ID does not exist" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.956196 4727 scope.go:117] "RemoveContainer" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" Jan 09 11:39:56 crc kubenswrapper[4727]: E0109 11:39:56.956563 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb\": container with ID starting with 58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb not found: ID does not exist" containerID="58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb" Jan 09 11:39:56 crc kubenswrapper[4727]: I0109 11:39:56.956592 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb"} err="failed to get container status \"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb\": rpc error: code = NotFound desc = could not find container \"58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb\": container with ID starting with 58e9a3ffde2e2f2a0b638601d7ede063216e20ebf38d7d093a6899c8ee2edadb not found: ID does not exist" Jan 09 11:39:58 crc kubenswrapper[4727]: I0109 11:39:58.874194 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" path="/var/lib/kubelet/pods/42513cc8-0316-49f1-8062-74a805d1e27b/volumes" Jan 09 11:40:10 crc kubenswrapper[4727]: I0109 11:40:10.861577 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:40:10 crc kubenswrapper[4727]: E0109 11:40:10.862543 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:40:25 crc kubenswrapper[4727]: I0109 11:40:25.860131 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:40:25 crc kubenswrapper[4727]: E0109 11:40:25.860810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:40:40 crc kubenswrapper[4727]: I0109 11:40:40.860463 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:40:41 crc kubenswrapper[4727]: I0109 11:40:41.318220 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e"} Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.594029 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:42 crc kubenswrapper[4727]: E0109 11:40:42.595474 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="extract-content" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.595498 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" 
containerName="extract-content" Jan 09 11:40:42 crc kubenswrapper[4727]: E0109 11:40:42.595546 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="extract-utilities" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.595560 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="extract-utilities" Jan 09 11:40:42 crc kubenswrapper[4727]: E0109 11:40:42.595590 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.595602 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.596076 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="42513cc8-0316-49f1-8062-74a805d1e27b" containerName="registry-server" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.598749 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.618896 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.691892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.692065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.692186 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794256 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794337 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.794993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.817799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"redhat-marketplace-npbxg\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:42 crc kubenswrapper[4727]: I0109 11:40:42.922905 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:43 crc kubenswrapper[4727]: I0109 11:40:43.509841 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:44 crc kubenswrapper[4727]: I0109 11:40:44.353183 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" exitCode=0 Jan 09 11:40:44 crc kubenswrapper[4727]: I0109 11:40:44.353413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753"} Jan 09 11:40:44 crc kubenswrapper[4727]: I0109 11:40:44.353642 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerStarted","Data":"22e92d8c401319bdf05053675c6ee0f39db482299f3df225cbcda4d4a2d66309"} Jan 09 11:40:46 crc kubenswrapper[4727]: I0109 11:40:46.376372 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" exitCode=0 Jan 09 11:40:46 crc kubenswrapper[4727]: I0109 11:40:46.376478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b"} Jan 09 11:40:47 crc kubenswrapper[4727]: I0109 11:40:47.390287 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" 
event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerStarted","Data":"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc"} Jan 09 11:40:47 crc kubenswrapper[4727]: I0109 11:40:47.411791 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npbxg" podStartSLOduration=2.947334593 podStartE2EDuration="5.411761955s" podCreationTimestamp="2026-01-09 11:40:42 +0000 UTC" firstStartedPulling="2026-01-09 11:40:44.356220763 +0000 UTC m=+3289.806125534" lastFinishedPulling="2026-01-09 11:40:46.820648115 +0000 UTC m=+3292.270552896" observedRunningTime="2026-01-09 11:40:47.411298821 +0000 UTC m=+3292.861203612" watchObservedRunningTime="2026-01-09 11:40:47.411761955 +0000 UTC m=+3292.861666746" Jan 09 11:40:52 crc kubenswrapper[4727]: I0109 11:40:52.923792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:52 crc kubenswrapper[4727]: I0109 11:40:52.924401 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:52 crc kubenswrapper[4727]: I0109 11:40:52.992978 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:53 crc kubenswrapper[4727]: I0109 11:40:53.499346 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:53 crc kubenswrapper[4727]: I0109 11:40:53.560250 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:55 crc kubenswrapper[4727]: I0109 11:40:55.463349 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npbxg" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" 
containerID="cri-o://66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" gracePeriod=2 Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.098413 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.132255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") pod \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.132360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") pod \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.132386 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") pod \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\" (UID: \"e1aa2ef9-2c42-46c6-ae66-42148ff8722d\") " Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.146793 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm" (OuterVolumeSpecName: "kube-api-access-29grm") pod "e1aa2ef9-2c42-46c6-ae66-42148ff8722d" (UID: "e1aa2ef9-2c42-46c6-ae66-42148ff8722d"). InnerVolumeSpecName "kube-api-access-29grm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.150722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities" (OuterVolumeSpecName: "utilities") pod "e1aa2ef9-2c42-46c6-ae66-42148ff8722d" (UID: "e1aa2ef9-2c42-46c6-ae66-42148ff8722d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.235643 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29grm\" (UniqueName: \"kubernetes.io/projected/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-kube-api-access-29grm\") on node \"crc\" DevicePath \"\"" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.236467 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.245147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1aa2ef9-2c42-46c6-ae66-42148ff8722d" (UID: "e1aa2ef9-2c42-46c6-ae66-42148ff8722d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.339927 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1aa2ef9-2c42-46c6-ae66-42148ff8722d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480691 4727 generic.go:334] "Generic (PLEG): container finished" podID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" exitCode=0 Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc"} Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npbxg" event={"ID":"e1aa2ef9-2c42-46c6-ae66-42148ff8722d","Type":"ContainerDied","Data":"22e92d8c401319bdf05053675c6ee0f39db482299f3df225cbcda4d4a2d66309"} Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.480935 4727 scope.go:117] "RemoveContainer" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.482206 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npbxg" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.514293 4727 scope.go:117] "RemoveContainer" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.526985 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.537761 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npbxg"] Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.559023 4727 scope.go:117] "RemoveContainer" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.617605 4727 scope.go:117] "RemoveContainer" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" Jan 09 11:40:56 crc kubenswrapper[4727]: E0109 11:40:56.622436 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc\": container with ID starting with 66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc not found: ID does not exist" containerID="66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.622570 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc"} err="failed to get container status \"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc\": rpc error: code = NotFound desc = could not find container \"66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc\": container with ID starting with 66c54225bb8292958925c0db9777200bae4922443ca6815b41b2f535b7f4dbfc not found: 
ID does not exist" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.622874 4727 scope.go:117] "RemoveContainer" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" Jan 09 11:40:56 crc kubenswrapper[4727]: E0109 11:40:56.624295 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b\": container with ID starting with e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b not found: ID does not exist" containerID="e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.624383 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b"} err="failed to get container status \"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b\": rpc error: code = NotFound desc = could not find container \"e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b\": container with ID starting with e6fd819e2868e76a4651d335bd9b88138cf671bd3d76bdd7d6c1a8278bbd8b2b not found: ID does not exist" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.624455 4727 scope.go:117] "RemoveContainer" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" Jan 09 11:40:56 crc kubenswrapper[4727]: E0109 11:40:56.625496 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753\": container with ID starting with fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753 not found: ID does not exist" containerID="fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.625560 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753"} err="failed to get container status \"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753\": rpc error: code = NotFound desc = could not find container \"fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753\": container with ID starting with fb7532b93df7def8a142f94075643382fe275db78462f17c49bd02d97ffae753 not found: ID does not exist" Jan 09 11:40:56 crc kubenswrapper[4727]: I0109 11:40:56.872647 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" path="/var/lib/kubelet/pods/e1aa2ef9-2c42-46c6-ae66-42148ff8722d/volumes" Jan 09 11:43:09 crc kubenswrapper[4727]: I0109 11:43:09.404886 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:43:09 crc kubenswrapper[4727]: I0109 11:43:09.405675 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.545745 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:33 crc kubenswrapper[4727]: E0109 11:43:33.547218 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-content" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547240 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-content" Jan 09 11:43:33 crc kubenswrapper[4727]: E0109 11:43:33.547284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-utilities" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547293 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="extract-utilities" Jan 09 11:43:33 crc kubenswrapper[4727]: E0109 11:43:33.547324 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547334 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.547681 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1aa2ef9-2c42-46c6-ae66-42148ff8722d" containerName="registry-server" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.550893 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.569878 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.662249 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.662378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.663243 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.765952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766346 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766397 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766658 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.766737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.804663 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"certified-operators-q7t5l\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:33 crc kubenswrapper[4727]: I0109 11:43:33.880738 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:34 crc kubenswrapper[4727]: I0109 11:43:34.254061 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:35 crc kubenswrapper[4727]: I0109 11:43:35.108826 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9eff72c-b10f-4813-a088-89b8f592276a" containerID="a8b9b837f3d64cab9ad49691366d5443456d32949ff182ebe10f074f06271689" exitCode=0 Jan 09 11:43:35 crc kubenswrapper[4727]: I0109 11:43:35.108940 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"a8b9b837f3d64cab9ad49691366d5443456d32949ff182ebe10f074f06271689"} Jan 09 11:43:35 crc kubenswrapper[4727]: I0109 11:43:35.109378 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerStarted","Data":"cfdc48fbb4ae6e8db8706e9770d3c85dd6529e3282a3c8e93a31f98df5aecc17"} Jan 09 11:43:37 crc kubenswrapper[4727]: I0109 11:43:37.134146 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9eff72c-b10f-4813-a088-89b8f592276a" containerID="057674623f5b7168f918bfb80a474162495f7bf1f3362667d12edc503c8bd12b" exitCode=0 Jan 09 11:43:37 crc kubenswrapper[4727]: I0109 11:43:37.134234 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"057674623f5b7168f918bfb80a474162495f7bf1f3362667d12edc503c8bd12b"} Jan 09 11:43:38 crc kubenswrapper[4727]: I0109 11:43:38.151724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" 
event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerStarted","Data":"2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4"} Jan 09 11:43:38 crc kubenswrapper[4727]: I0109 11:43:38.177041 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7t5l" podStartSLOduration=2.636370005 podStartE2EDuration="5.17702111s" podCreationTimestamp="2026-01-09 11:43:33 +0000 UTC" firstStartedPulling="2026-01-09 11:43:35.112302885 +0000 UTC m=+3460.562207666" lastFinishedPulling="2026-01-09 11:43:37.65295399 +0000 UTC m=+3463.102858771" observedRunningTime="2026-01-09 11:43:38.173085754 +0000 UTC m=+3463.622990535" watchObservedRunningTime="2026-01-09 11:43:38.17702111 +0000 UTC m=+3463.626925891" Jan 09 11:43:39 crc kubenswrapper[4727]: I0109 11:43:39.405123 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:43:39 crc kubenswrapper[4727]: I0109 11:43:39.405690 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:43:43 crc kubenswrapper[4727]: I0109 11:43:43.881059 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:43 crc kubenswrapper[4727]: I0109 11:43:43.881932 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:43 crc kubenswrapper[4727]: I0109 11:43:43.951834 4727 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:44 crc kubenswrapper[4727]: I0109 11:43:44.266813 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:44 crc kubenswrapper[4727]: I0109 11:43:44.336467 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:46 crc kubenswrapper[4727]: I0109 11:43:46.236204 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7t5l" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" containerID="cri-o://2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4" gracePeriod=2 Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250432 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9eff72c-b10f-4813-a088-89b8f592276a" containerID="2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4" exitCode=0 Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4"} Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250829 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7t5l" event={"ID":"e9eff72c-b10f-4813-a088-89b8f592276a","Type":"ContainerDied","Data":"cfdc48fbb4ae6e8db8706e9770d3c85dd6529e3282a3c8e93a31f98df5aecc17"} Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.250857 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfdc48fbb4ae6e8db8706e9770d3c85dd6529e3282a3c8e93a31f98df5aecc17" Jan 09 11:43:47 crc kubenswrapper[4727]: 
I0109 11:43:47.347112 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.485015 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") pod \"e9eff72c-b10f-4813-a088-89b8f592276a\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.485277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") pod \"e9eff72c-b10f-4813-a088-89b8f592276a\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.485388 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") pod \"e9eff72c-b10f-4813-a088-89b8f592276a\" (UID: \"e9eff72c-b10f-4813-a088-89b8f592276a\") " Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.486907 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities" (OuterVolumeSpecName: "utilities") pod "e9eff72c-b10f-4813-a088-89b8f592276a" (UID: "e9eff72c-b10f-4813-a088-89b8f592276a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.505835 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn" (OuterVolumeSpecName: "kube-api-access-bvkvn") pod "e9eff72c-b10f-4813-a088-89b8f592276a" (UID: "e9eff72c-b10f-4813-a088-89b8f592276a"). InnerVolumeSpecName "kube-api-access-bvkvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.546875 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9eff72c-b10f-4813-a088-89b8f592276a" (UID: "e9eff72c-b10f-4813-a088-89b8f592276a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.588207 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.588275 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvkvn\" (UniqueName: \"kubernetes.io/projected/e9eff72c-b10f-4813-a088-89b8f592276a-kube-api-access-bvkvn\") on node \"crc\" DevicePath \"\"" Jan 09 11:43:47 crc kubenswrapper[4727]: I0109 11:43:47.588291 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9eff72c-b10f-4813-a088-89b8f592276a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.260182 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7t5l" Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.301170 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.312974 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7t5l"] Jan 09 11:43:48 crc kubenswrapper[4727]: I0109 11:43:48.873735 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" path="/var/lib/kubelet/pods/e9eff72c-b10f-4813-a088-89b8f592276a/volumes" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.404990 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.405817 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.405908 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.407260 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:44:09 crc kubenswrapper[4727]: I0109 11:44:09.407333 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e" gracePeriod=600 Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493121 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e" exitCode=0 Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e"} Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"} Jan 09 11:44:10 crc kubenswrapper[4727]: I0109 11:44:10.493747 4727 scope.go:117] "RemoveContainer" containerID="126d0da39b29196007ca1357498c8ff512b2d51333761c7877c22e17acd9e0af" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.548184 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:17 crc kubenswrapper[4727]: E0109 11:44:17.550141 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 
11:44:17.550161 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" Jan 09 11:44:17 crc kubenswrapper[4727]: E0109 11:44:17.550185 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-content" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.550213 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-content" Jan 09 11:44:17 crc kubenswrapper[4727]: E0109 11:44:17.550246 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-utilities" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.550255 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="extract-utilities" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.552033 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eff72c-b10f-4813-a088-89b8f592276a" containerName="registry-server" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.554378 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.558061 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.713858 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.714387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.714614 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.817558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.817669 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.817757 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.818601 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.818730 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.841543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"community-operators-qdnfg\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") " pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:17 crc kubenswrapper[4727]: I0109 11:44:17.924759 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg" Jan 09 11:44:18 crc kubenswrapper[4727]: I0109 11:44:18.459904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"] Jan 09 11:44:18 crc kubenswrapper[4727]: I0109 11:44:18.572034 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerStarted","Data":"09a9b5a0dc6b9424c9cfba3511d9e49daf69aca584d57a4611284068297cad26"} Jan 09 11:44:19 crc kubenswrapper[4727]: I0109 11:44:19.584008 4727 generic.go:334] "Generic (PLEG): container finished" podID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5" exitCode=0 Jan 09 11:44:19 crc kubenswrapper[4727]: I0109 11:44:19.584057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"} Jan 09 11:44:21 crc kubenswrapper[4727]: I0109 11:44:21.605601 4727 generic.go:334] "Generic (PLEG): container finished" podID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5" exitCode=0 Jan 09 11:44:21 crc kubenswrapper[4727]: I0109 11:44:21.605686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"} Jan 09 11:44:22 crc kubenswrapper[4727]: I0109 11:44:22.617718 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" 
event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerStarted","Data":"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"}
Jan 09 11:44:22 crc kubenswrapper[4727]: I0109 11:44:22.648092 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qdnfg" podStartSLOduration=2.960647557 podStartE2EDuration="5.648064832s" podCreationTimestamp="2026-01-09 11:44:17 +0000 UTC" firstStartedPulling="2026-01-09 11:44:19.586461532 +0000 UTC m=+3505.036366313" lastFinishedPulling="2026-01-09 11:44:22.273878807 +0000 UTC m=+3507.723783588" observedRunningTime="2026-01-09 11:44:22.640361193 +0000 UTC m=+3508.090265994" watchObservedRunningTime="2026-01-09 11:44:22.648064832 +0000 UTC m=+3508.097969613"
Jan 09 11:44:27 crc kubenswrapper[4727]: I0109 11:44:27.925692 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qdnfg"
Jan 09 11:44:27 crc kubenswrapper[4727]: I0109 11:44:27.926653 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qdnfg"
Jan 09 11:44:27 crc kubenswrapper[4727]: I0109 11:44:27.972670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qdnfg"
Jan 09 11:44:28 crc kubenswrapper[4727]: I0109 11:44:28.716324 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qdnfg"
Jan 09 11:44:28 crc kubenswrapper[4727]: I0109 11:44:28.777606 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"]
Jan 09 11:44:30 crc kubenswrapper[4727]: I0109 11:44:30.690905 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qdnfg" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server" containerID="cri-o://823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" gracePeriod=2
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.160812 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.232187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") pod \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") "
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.232681 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") pod \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") "
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.232811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") pod \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\" (UID: \"f79b54d3-f079-42de-b8bf-baab3dc5e17d\") "
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.233452 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities" (OuterVolumeSpecName: "utilities") pod "f79b54d3-f079-42de-b8bf-baab3dc5e17d" (UID: "f79b54d3-f079-42de-b8bf-baab3dc5e17d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.239838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h" (OuterVolumeSpecName: "kube-api-access-nvf4h") pod "f79b54d3-f079-42de-b8bf-baab3dc5e17d" (UID: "f79b54d3-f079-42de-b8bf-baab3dc5e17d"). InnerVolumeSpecName "kube-api-access-nvf4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.335481 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvf4h\" (UniqueName: \"kubernetes.io/projected/f79b54d3-f079-42de-b8bf-baab3dc5e17d-kube-api-access-nvf4h\") on node \"crc\" DevicePath \"\""
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.335547 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704275 4727 generic.go:334] "Generic (PLEG): container finished" podID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc" exitCode=0
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704336 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"}
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdnfg" event={"ID":"f79b54d3-f079-42de-b8bf-baab3dc5e17d","Type":"ContainerDied","Data":"09a9b5a0dc6b9424c9cfba3511d9e49daf69aca584d57a4611284068297cad26"}
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704415 4727 scope.go:117] "RemoveContainer" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.704595 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdnfg"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.721071 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79b54d3-f079-42de-b8bf-baab3dc5e17d" (UID: "f79b54d3-f079-42de-b8bf-baab3dc5e17d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.729066 4727 scope.go:117] "RemoveContainer" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.745219 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b54d3-f079-42de-b8bf-baab3dc5e17d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.755974 4727 scope.go:117] "RemoveContainer" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.815263 4727 scope.go:117] "RemoveContainer" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"
Jan 09 11:44:31 crc kubenswrapper[4727]: E0109 11:44:31.815760 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc\": container with ID starting with 823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc not found: ID does not exist" containerID="823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.815794 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc"} err="failed to get container status \"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc\": rpc error: code = NotFound desc = could not find container \"823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc\": container with ID starting with 823262a8b87cb2e49198f382a7ed02ce508f16e1c96b8d6e210a818007995cdc not found: ID does not exist"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.815822 4727 scope.go:117] "RemoveContainer" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"
Jan 09 11:44:31 crc kubenswrapper[4727]: E0109 11:44:31.816169 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5\": container with ID starting with 36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5 not found: ID does not exist" containerID="36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.816243 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5"} err="failed to get container status \"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5\": rpc error: code = NotFound desc = could not find container \"36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5\": container with ID starting with 36192b48ec0fe79960d83814acb0b00da18e4d4c889098cf3dce3c8a0b27aca5 not found: ID does not exist"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.816281 4727 scope.go:117] "RemoveContainer" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"
Jan 09 11:44:31 crc kubenswrapper[4727]: E0109 11:44:31.816788 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5\": container with ID starting with 42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5 not found: ID does not exist" containerID="42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"
Jan 09 11:44:31 crc kubenswrapper[4727]: I0109 11:44:31.816820 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5"} err="failed to get container status \"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5\": rpc error: code = NotFound desc = could not find container \"42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5\": container with ID starting with 42166660d9765d7a0a3c53839b68ff68eff77cb1241874eeb739617a7ccb7cc5 not found: ID does not exist"
Jan 09 11:44:32 crc kubenswrapper[4727]: I0109 11:44:32.045268 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"]
Jan 09 11:44:32 crc kubenswrapper[4727]: I0109 11:44:32.056466 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qdnfg"]
Jan 09 11:44:32 crc kubenswrapper[4727]: I0109 11:44:32.872351 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" path="/var/lib/kubelet/pods/f79b54d3-f079-42de-b8bf-baab3dc5e17d/volumes"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.147974 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"]
Jan 09 11:45:00 crc kubenswrapper[4727]: E0109 11:45:00.149357 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149375 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server"
Jan 09 11:45:00 crc kubenswrapper[4727]: E0109 11:45:00.149400 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-utilities"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149407 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-utilities"
Jan 09 11:45:00 crc kubenswrapper[4727]: E0109 11:45:00.149424 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-content"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149430 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="extract-content"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.149691 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79b54d3-f079-42de-b8bf-baab3dc5e17d" containerName="registry-server"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.150549 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.211973 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.212196 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.221579 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"]
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.315060 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.315252 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.315286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.418054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.419010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.419053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.420845 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.431236 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.438840 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"collect-profiles-29465985-hsmqr\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.536568 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:00 crc kubenswrapper[4727]: I0109 11:45:00.995821 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"]
Jan 09 11:45:01 crc kubenswrapper[4727]: I0109 11:45:01.021181 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" event={"ID":"7d8f743b-1add-4fe9-982e-0bfc6907c483","Type":"ContainerStarted","Data":"c17e6b05f48cf2e91fb574e4a942e37df0959e0df263be063e5725424be73aa7"}
Jan 09 11:45:02 crc kubenswrapper[4727]: I0109 11:45:02.034461 4727 generic.go:334] "Generic (PLEG): container finished" podID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerID="4ebec5e3c30b190966b50316c5cc72ed18855f1c4ae9afca377ae1063871ec25" exitCode=0
Jan 09 11:45:02 crc kubenswrapper[4727]: I0109 11:45:02.034562 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" event={"ID":"7d8f743b-1add-4fe9-982e-0bfc6907c483","Type":"ContainerDied","Data":"4ebec5e3c30b190966b50316c5cc72ed18855f1c4ae9afca377ae1063871ec25"}
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.474027 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.592234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") pod \"7d8f743b-1add-4fe9-982e-0bfc6907c483\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") "
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.592371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") pod \"7d8f743b-1add-4fe9-982e-0bfc6907c483\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") "
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.592600 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") pod \"7d8f743b-1add-4fe9-982e-0bfc6907c483\" (UID: \"7d8f743b-1add-4fe9-982e-0bfc6907c483\") "
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.593613 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d8f743b-1add-4fe9-982e-0bfc6907c483" (UID: "7d8f743b-1add-4fe9-982e-0bfc6907c483"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.599647 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d8f743b-1add-4fe9-982e-0bfc6907c483" (UID: "7d8f743b-1add-4fe9-982e-0bfc6907c483"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.599746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt" (OuterVolumeSpecName: "kube-api-access-fwsnt") pod "7d8f743b-1add-4fe9-982e-0bfc6907c483" (UID: "7d8f743b-1add-4fe9-982e-0bfc6907c483"). InnerVolumeSpecName "kube-api-access-fwsnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.712472 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d8f743b-1add-4fe9-982e-0bfc6907c483-config-volume\") on node \"crc\" DevicePath \"\""
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.712553 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwsnt\" (UniqueName: \"kubernetes.io/projected/7d8f743b-1add-4fe9-982e-0bfc6907c483-kube-api-access-fwsnt\") on node \"crc\" DevicePath \"\""
Jan 09 11:45:03 crc kubenswrapper[4727]: I0109 11:45:03.712605 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d8f743b-1add-4fe9-982e-0bfc6907c483-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.062326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr" event={"ID":"7d8f743b-1add-4fe9-982e-0bfc6907c483","Type":"ContainerDied","Data":"c17e6b05f48cf2e91fb574e4a942e37df0959e0df263be063e5725424be73aa7"}
Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.062388 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c17e6b05f48cf2e91fb574e4a942e37df0959e0df263be063e5725424be73aa7"
Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.062426 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465985-hsmqr"
Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.583015 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"]
Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.594189 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465940-546ww"]
Jan 09 11:45:04 crc kubenswrapper[4727]: I0109 11:45:04.872989 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4efe522-b8d6-44a6-a75b-7cb19f528323" path="/var/lib/kubelet/pods/f4efe522-b8d6-44a6-a75b-7cb19f528323/volumes"
Jan 09 11:45:40 crc kubenswrapper[4727]: I0109 11:45:40.515778 4727 scope.go:117] "RemoveContainer" containerID="b65ad815096d70648fb353956b9ad150a228f000450b80449e7948a4c212e007"
Jan 09 11:46:08 crc kubenswrapper[4727]: I0109 11:46:08.738826 4727 generic.go:334] "Generic (PLEG): container finished" podID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerID="6fd71c43d4d8330f713c6bebee4de8234126f4e73026f0f31d0a1aa516bc5ecc" exitCode=0
Jan 09 11:46:08 crc kubenswrapper[4727]: I0109 11:46:08.739004 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerDied","Data":"6fd71c43d4d8330f713c6bebee4de8234126f4e73026f0f31d0a1aa516bc5ecc"}
Jan 09 11:46:09 crc kubenswrapper[4727]: I0109 11:46:09.405906 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:46:09 crc kubenswrapper[4727]: I0109 11:46:09.406007 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.157065 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164323 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164580 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164753 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164810 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.164947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.165008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.165042 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.165183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") pod \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\" (UID: \"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e\") "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.169915 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.170354 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data" (OuterVolumeSpecName: "config-data") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.173067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.174126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.178719 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz" (OuterVolumeSpecName: "kube-api-access-dqnbz") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "kube-api-access-dqnbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.206438 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.206942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.226727 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.242899 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" (UID: "52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.266762 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267036 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqnbz\" (UniqueName: \"kubernetes.io/projected/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-kube-api-access-dqnbz\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267107 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267208 4727 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267270 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267344 4727 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267416 4727 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.267535 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.271419 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.294696 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.374321 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.762760 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e","Type":"ContainerDied","Data":"8349c448d8e6552d0e3152e0251e4b01ee6c1b1475591f37b47c5feb06d40267"}
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.762814 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8349c448d8e6552d0e3152e0251e4b01ee6c1b1475591f37b47c5feb06d40267"
Jan 09 11:46:10 crc kubenswrapper[4727]: I0109 11:46:10.762884 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.634703 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 09 11:46:17 crc kubenswrapper[4727]: E0109 11:46:17.637446 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerName="collect-profiles"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.637569 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerName="collect-profiles"
Jan 09 11:46:17 crc kubenswrapper[4727]: E0109 11:46:17.637665 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerName="tempest-tests-tempest-tests-runner"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.637770 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerName="tempest-tests-tempest-tests-runner"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.638156 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e" containerName="tempest-tests-tempest-tests-runner"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.638274 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8f743b-1add-4fe9-982e-0bfc6907c483" containerName="collect-profiles"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.639373 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.642730 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ghr4t"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.645918 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.755829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.756064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjgs\" (UniqueName: \"kubernetes.io/projected/65b47f8e-eab5-4015-9926-36dcf8a8a1f0-kube-api-access-vtjgs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.857656 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.857772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjgs\" (UniqueName:
\"kubernetes.io/projected/65b47f8e-eab5-4015-9926-36dcf8a8a1f0-kube-api-access-vtjgs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.859078 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.879166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjgs\" (UniqueName: \"kubernetes.io/projected/65b47f8e-eab5-4015-9926-36dcf8a8a1f0-kube-api-access-vtjgs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.885869 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"65b47f8e-eab5-4015-9926-36dcf8a8a1f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:17 crc kubenswrapper[4727]: I0109 11:46:17.975529 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 09 11:46:18 crc kubenswrapper[4727]: I0109 11:46:18.466996 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 09 11:46:18 crc kubenswrapper[4727]: I0109 11:46:18.469466 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:46:18 crc kubenswrapper[4727]: I0109 11:46:18.840276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"65b47f8e-eab5-4015-9926-36dcf8a8a1f0","Type":"ContainerStarted","Data":"98bc671b77252cccbb3e0727f05231734034df5290b99678df9ec8fcc0b01513"} Jan 09 11:46:19 crc kubenswrapper[4727]: I0109 11:46:19.850368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"65b47f8e-eab5-4015-9926-36dcf8a8a1f0","Type":"ContainerStarted","Data":"7a436021427d9eeed6efd181ef88b40f28ff82051106766fa35113772a806afe"} Jan 09 11:46:19 crc kubenswrapper[4727]: I0109 11:46:19.868023 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.772279412 podStartE2EDuration="2.867999893s" podCreationTimestamp="2026-01-09 11:46:17 +0000 UTC" firstStartedPulling="2026-01-09 11:46:18.469239777 +0000 UTC m=+3623.919144558" lastFinishedPulling="2026-01-09 11:46:19.564960258 +0000 UTC m=+3625.014865039" observedRunningTime="2026-01-09 11:46:19.861925228 +0000 UTC m=+3625.311830019" watchObservedRunningTime="2026-01-09 11:46:19.867999893 +0000 UTC m=+3625.317904694" Jan 09 11:46:39 crc kubenswrapper[4727]: I0109 11:46:39.405354 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:46:39 crc kubenswrapper[4727]: I0109 11:46:39.406329 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.685241 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.689313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.693001 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dwztv"/"openshift-service-ca.crt" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.693168 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dwztv"/"default-dockercfg-bl4mm" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.693468 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dwztv"/"kube-root-ca.crt" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.725605 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.745574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " 
pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.745659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.848341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.848463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.848951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:42 crc kubenswrapper[4727]: I0109 11:46:42.869873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"must-gather-hnbtv\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " 
pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:43 crc kubenswrapper[4727]: I0109 11:46:43.016577 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:46:43 crc kubenswrapper[4727]: I0109 11:46:43.519918 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:46:44 crc kubenswrapper[4727]: I0109 11:46:44.127609 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerStarted","Data":"71abb5ece9245a9429a415aa2a433943d1c05337ce26e0114c0f545b54ef0723"} Jan 09 11:46:51 crc kubenswrapper[4727]: I0109 11:46:51.208828 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerStarted","Data":"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70"} Jan 09 11:46:52 crc kubenswrapper[4727]: I0109 11:46:52.219733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerStarted","Data":"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676"} Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.002725 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dwztv/must-gather-hnbtv" podStartSLOduration=5.752159253 podStartE2EDuration="13.002697359s" podCreationTimestamp="2026-01-09 11:46:42 +0000 UTC" firstStartedPulling="2026-01-09 11:46:43.529848298 +0000 UTC m=+3648.979753079" lastFinishedPulling="2026-01-09 11:46:50.780386364 +0000 UTC m=+3656.230291185" observedRunningTime="2026-01-09 11:46:52.243856108 +0000 UTC m=+3657.693760889" watchObservedRunningTime="2026-01-09 11:46:55.002697359 +0000 UTC 
m=+3660.452602150" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.013278 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v6lnm"] Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.015197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.066694 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.066785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.168879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.168975 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.169392 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.212266 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"crc-debug-v6lnm\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: I0109 11:46:55.348140 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:46:55 crc kubenswrapper[4727]: W0109 11:46:55.394453 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc971b42_d720_4b3e_8686_02a203f4c925.slice/crio-a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55 WatchSource:0}: Error finding container a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55: Status 404 returned error can't find the container with id a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55 Jan 09 11:46:56 crc kubenswrapper[4727]: I0109 11:46:56.255360 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" event={"ID":"bc971b42-d720-4b3e-8686-02a203f4c925","Type":"ContainerStarted","Data":"a867dff3f5934ec945b0f29d5f55cbd44b739f58fa11829f36f66091f600ea55"} Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.394912 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" 
event={"ID":"bc971b42-d720-4b3e-8686-02a203f4c925","Type":"ContainerStarted","Data":"a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c"} Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.405566 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.405641 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.405701 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.406731 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:47:09 crc kubenswrapper[4727]: I0109 11:47:09.406799 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" gracePeriod=600 Jan 09 11:47:10 crc kubenswrapper[4727]: E0109 11:47:10.106960 
4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.407621 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" exitCode=0 Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.409140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"} Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.409189 4727 scope.go:117] "RemoveContainer" containerID="1281e6c9576cdc31b7396965022ec562500f334a6392057ca4d4b53402eda30e" Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.409633 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:10 crc kubenswrapper[4727]: E0109 11:47:10.409923 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:10 crc kubenswrapper[4727]: I0109 11:47:10.433389 4727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" podStartSLOduration=3.042423175 podStartE2EDuration="16.433368165s" podCreationTimestamp="2026-01-09 11:46:54 +0000 UTC" firstStartedPulling="2026-01-09 11:46:55.397386386 +0000 UTC m=+3660.847291167" lastFinishedPulling="2026-01-09 11:47:08.788331376 +0000 UTC m=+3674.238236157" observedRunningTime="2026-01-09 11:47:10.426788387 +0000 UTC m=+3675.876693168" watchObservedRunningTime="2026-01-09 11:47:10.433368165 +0000 UTC m=+3675.883272946" Jan 09 11:47:23 crc kubenswrapper[4727]: I0109 11:47:23.861101 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:23 crc kubenswrapper[4727]: E0109 11:47:23.862301 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:38 crc kubenswrapper[4727]: I0109 11:47:38.861889 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:38 crc kubenswrapper[4727]: E0109 11:47:38.862723 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:47:49 crc kubenswrapper[4727]: I0109 11:47:49.870115 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:47:49 crc kubenswrapper[4727]: E0109 11:47:49.871446 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:00 crc kubenswrapper[4727]: I0109 11:48:00.861292 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:00 crc kubenswrapper[4727]: E0109 11:48:00.862307 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:05 crc kubenswrapper[4727]: I0109 11:48:05.022728 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc971b42-d720-4b3e-8686-02a203f4c925" containerID="a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c" exitCode=0 Jan 09 11:48:05 crc kubenswrapper[4727]: I0109 11:48:05.023183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" event={"ID":"bc971b42-d720-4b3e-8686-02a203f4c925","Type":"ContainerDied","Data":"a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c"} Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.138743 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.184861 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v6lnm"] Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.197229 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v6lnm"] Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.326846 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") pod \"bc971b42-d720-4b3e-8686-02a203f4c925\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.327179 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") pod \"bc971b42-d720-4b3e-8686-02a203f4c925\" (UID: \"bc971b42-d720-4b3e-8686-02a203f4c925\") " Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.327891 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host" (OuterVolumeSpecName: "host") pod "bc971b42-d720-4b3e-8686-02a203f4c925" (UID: "bc971b42-d720-4b3e-8686-02a203f4c925"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.356250 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62" (OuterVolumeSpecName: "kube-api-access-pnq62") pod "bc971b42-d720-4b3e-8686-02a203f4c925" (UID: "bc971b42-d720-4b3e-8686-02a203f4c925"). InnerVolumeSpecName "kube-api-access-pnq62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.430760 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc971b42-d720-4b3e-8686-02a203f4c925-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.430814 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnq62\" (UniqueName: \"kubernetes.io/projected/bc971b42-d720-4b3e-8686-02a203f4c925-kube-api-access-pnq62\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:06 crc kubenswrapper[4727]: I0109 11:48:06.876816 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" path="/var/lib/kubelet/pods/bc971b42-d720-4b3e-8686-02a203f4c925/volumes" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.051317 4727 scope.go:117] "RemoveContainer" containerID="a112ac11dc4db3f7da8bd2c21477c48e468c17d0ca9ca0a7e790eb4767a9932c" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.051378 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v6lnm" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.372955 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/crc-debug-8nxkz"] Jan 09 11:48:07 crc kubenswrapper[4727]: E0109 11:48:07.374698 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" containerName="container-00" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.374798 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" containerName="container-00" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.375262 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc971b42-d720-4b3e-8686-02a203f4c925" containerName="container-00" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.376253 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.555365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.555612 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.657905 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b966b\" (UniqueName: 
\"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.658029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.658167 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.690126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"crc-debug-8nxkz\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: I0109 11:48:07.695860 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:07 crc kubenswrapper[4727]: W0109 11:48:07.731923 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68fefaa2_2054_4a47_afbd_3ef34e97798b.slice/crio-5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f WatchSource:0}: Error finding container 5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f: Status 404 returned error can't find the container with id 5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f Jan 09 11:48:08 crc kubenswrapper[4727]: I0109 11:48:08.065938 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" event={"ID":"68fefaa2-2054-4a47-afbd-3ef34e97798b","Type":"ContainerStarted","Data":"5edddcc1cebf164828f5c4f9f4289bb5927bc36fa1658252f27014fe41b4fd3f"} Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.077470 4727 generic.go:334] "Generic (PLEG): container finished" podID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerID="5d1b721da3806073c99a9aefddf0506e578acb1644038ff177b539b16fe78408" exitCode=0 Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.077556 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" event={"ID":"68fefaa2-2054-4a47-afbd-3ef34e97798b","Type":"ContainerDied","Data":"5d1b721da3806073c99a9aefddf0506e578acb1644038ff177b539b16fe78408"} Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.654337 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-8nxkz"] Jan 09 11:48:09 crc kubenswrapper[4727]: I0109 11:48:09.666519 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-8nxkz"] Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.230924 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.422343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") pod \"68fefaa2-2054-4a47-afbd-3ef34e97798b\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.422894 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") pod \"68fefaa2-2054-4a47-afbd-3ef34e97798b\" (UID: \"68fefaa2-2054-4a47-afbd-3ef34e97798b\") " Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.422661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host" (OuterVolumeSpecName: "host") pod "68fefaa2-2054-4a47-afbd-3ef34e97798b" (UID: "68fefaa2-2054-4a47-afbd-3ef34e97798b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.423783 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68fefaa2-2054-4a47-afbd-3ef34e97798b-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.429885 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b" (OuterVolumeSpecName: "kube-api-access-b966b") pod "68fefaa2-2054-4a47-afbd-3ef34e97798b" (UID: "68fefaa2-2054-4a47-afbd-3ef34e97798b"). InnerVolumeSpecName "kube-api-access-b966b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.526018 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b966b\" (UniqueName: \"kubernetes.io/projected/68fefaa2-2054-4a47-afbd-3ef34e97798b-kube-api-access-b966b\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.846252 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v8xfq"] Jan 09 11:48:10 crc kubenswrapper[4727]: E0109 11:48:10.846866 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerName="container-00" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.846896 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerName="container-00" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.847124 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" containerName="container-00" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.848143 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:10 crc kubenswrapper[4727]: I0109 11:48:10.878434 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68fefaa2-2054-4a47-afbd-3ef34e97798b" path="/var/lib/kubelet/pods/68fefaa2-2054-4a47-afbd-3ef34e97798b/volumes" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.038237 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.038324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.102576 4727 scope.go:117] "RemoveContainer" containerID="5d1b721da3806073c99a9aefddf0506e578acb1644038ff177b539b16fe78408" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.102656 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-8nxkz" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.141612 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.142225 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.142018 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.162350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"crc-debug-v8xfq\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: I0109 11:48:11.169408 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:11 crc kubenswrapper[4727]: W0109 11:48:11.210277 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod981355da_ce46_4790_9eea_9af34f7cc603.slice/crio-7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623 WatchSource:0}: Error finding container 7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623: Status 404 returned error can't find the container with id 7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623 Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.118687 4727 generic.go:334] "Generic (PLEG): container finished" podID="981355da-ce46-4790-9eea-9af34f7cc603" containerID="9f9bc3e161ad92af21f9f16003aee0e88064ed893d40b8c96b64d85820d1df81" exitCode=0 Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.118749 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" event={"ID":"981355da-ce46-4790-9eea-9af34f7cc603","Type":"ContainerDied","Data":"9f9bc3e161ad92af21f9f16003aee0e88064ed893d40b8c96b64d85820d1df81"} Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.118790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" event={"ID":"981355da-ce46-4790-9eea-9af34f7cc603","Type":"ContainerStarted","Data":"7cc89d54bd15edca3fac18778b05a5c54a3e99936f400de01b952f18b8c5f623"} Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.163965 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v8xfq"] Jan 09 11:48:12 crc kubenswrapper[4727]: I0109 11:48:12.176402 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/crc-debug-v8xfq"] Jan 09 11:48:13 crc kubenswrapper[4727]: I0109 11:48:13.875203 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.059773 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") pod \"981355da-ce46-4790-9eea-9af34f7cc603\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.060000 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") pod \"981355da-ce46-4790-9eea-9af34f7cc603\" (UID: \"981355da-ce46-4790-9eea-9af34f7cc603\") " Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.060125 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host" (OuterVolumeSpecName: "host") pod "981355da-ce46-4790-9eea-9af34f7cc603" (UID: "981355da-ce46-4790-9eea-9af34f7cc603"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.060765 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/981355da-ce46-4790-9eea-9af34f7cc603-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.069874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6" (OuterVolumeSpecName: "kube-api-access-gdpp6") pod "981355da-ce46-4790-9eea-9af34f7cc603" (UID: "981355da-ce46-4790-9eea-9af34f7cc603"). InnerVolumeSpecName "kube-api-access-gdpp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.163002 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpp6\" (UniqueName: \"kubernetes.io/projected/981355da-ce46-4790-9eea-9af34f7cc603-kube-api-access-gdpp6\") on node \"crc\" DevicePath \"\"" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.771374 4727 scope.go:117] "RemoveContainer" containerID="9f9bc3e161ad92af21f9f16003aee0e88064ed893d40b8c96b64d85820d1df81" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.771640 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/crc-debug-v8xfq" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.867169 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:14 crc kubenswrapper[4727]: E0109 11:48:14.867926 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:14 crc kubenswrapper[4727]: I0109 11:48:14.887423 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981355da-ce46-4790-9eea-9af34f7cc603" path="/var/lib/kubelet/pods/981355da-ce46-4790-9eea-9af34f7cc603/volumes" Jan 09 11:48:26 crc kubenswrapper[4727]: I0109 11:48:26.862615 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:48:26 crc kubenswrapper[4727]: E0109 11:48:26.863545 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.622104 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api/0.log" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.853072 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api-log/0.log" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.883994 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener/0.log" Jan 09 11:48:29 crc kubenswrapper[4727]: I0109 11:48:29.922182 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener-log/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.114745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.122583 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker-log/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.366757 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-central-agent/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 
11:48:30.406770 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc_23e25abc-b16a-4273-846e-7fab7ef1a095/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.462245 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-notification-agent/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.586385 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/proxy-httpd/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.626486 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/sg-core/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.715979 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.806393 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api-log/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.929465 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/cinder-scheduler/0.log" Jan 09 11:48:30 crc kubenswrapper[4727]: I0109 11:48:30.995879 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/probe/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.173496 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-x2djn_f1169cca-13ce-4a18-8901-faa73fc5b913/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 
09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.251001 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2l88s_fc6114d6-7052-46b3-a8e5-c8b9731cc92c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.420096 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.620017 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.686816 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/dnsmasq-dns/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.890689 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz_79cfc519-9725-4957-b42c-d262651895a3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.945947 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-log/0.log" Jan 09 11:48:31 crc kubenswrapper[4727]: I0109 11:48:31.985242 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-httpd/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.435140 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-log/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.438437 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-httpd/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.695059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.850820 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qplw9_a4f9d22c-83b0-4c0c-95e3-a2b2937908db/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:32 crc kubenswrapper[4727]: I0109 11:48:32.986766 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon-log/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.010472 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qs4rr_e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.351483 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3/kube-state-metrics/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.401954 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-666857844b-c2hp6_3738e7aa-d182-43a0-962c-b735526851f2/keystone-api/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.540059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zs24v_a56270d2-f80b-4dda-a64c-fe39d4b4a9e5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.814089 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-api/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.918899 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-httpd/0.log" Jan 09 11:48:33 crc kubenswrapper[4727]: I0109 11:48:33.970905 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82_92bbfcf1-befd-42df-a532-97f9a3bd22d0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.496136 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-log/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.574795 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3aab78e7-6f64-4c9e-bb37-f670092f06eb/nova-cell0-conductor-conductor/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.759903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-api/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.826167 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6a601271-3d79-4446-bc6f-81b4490541f4/nova-cell1-conductor-conductor/0.log" Jan 09 11:48:34 crc kubenswrapper[4727]: I0109 11:48:34.991535 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7275705c-d408-4eb4-af28-b9b51403b913/nova-cell1-novncproxy-novncproxy/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.234657 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s9spc_291b6783-3c71-4449-b696-27c7c340c41a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.355066 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-log/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.713613 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1203f055-468b-48e1-b859-78a4d11d5034/nova-scheduler-scheduler/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.784930 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.930799 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log" Jan 09 11:48:35 crc kubenswrapper[4727]: I0109 11:48:35.976273 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/galera/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.157149 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.370657 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.460048 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/galera/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.611351 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_openstackclient_06c8d5e8-c424-4b08-98a2-8e89fa5a27b4/openstackclient/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.687744 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p58fw_ede60be2-7d1e-482a-b994-6c552d322575/openstack-network-exporter/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.792749 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-metadata/0.log" Jan 09 11:48:36 crc kubenswrapper[4727]: I0109 11:48:36.966569 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwrp2_d81594ff-04f5-47c2-9620-db583609e9aa/ovn-controller/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.056547 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.331322 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.343497 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovs-vswitchd/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.374589 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.584874 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/openstack-network-exporter/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.636482 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rhzcm_5ebde73e-573e-4b52-b779-dd3cd03761e0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.758872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/ovn-northd/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.869807 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/ovsdbserver-nb/0.log" Jan 09 11:48:37 crc kubenswrapper[4727]: I0109 11:48:37.964158 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/openstack-network-exporter/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.075372 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/openstack-network-exporter/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.115823 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/ovsdbserver-sb/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.815872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-api/0.log" Jan 09 11:48:38 crc kubenswrapper[4727]: I0109 11:48:38.840262 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-log/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.028413 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.277478 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.299157 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/rabbitmq/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.385200 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.558866 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.765269 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd_72a53995-d5d0-4795-a1c7-f8a570a0ff6a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.771582 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/rabbitmq/0.log" Jan 09 11:48:39 crc kubenswrapper[4727]: I0109 11:48:39.942568 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4zggm_ce764242-0f23-4580-87ee-9f0f2f81fb0e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.053525 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv_d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.212781 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-27qwg_6f717d58-9e42-4359-89e8-70a60345d546/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.341129 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9n6wb_247ff33e-a764-4e75-9d54-2c45ae8d8ca7/ssh-known-hosts-edpm-deployment/0.log"
Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.859792 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:48:40 crc kubenswrapper[4727]: E0109 11:48:40.860193 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:48:40 crc kubenswrapper[4727]: I0109 11:48:40.971394 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-server/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.076380 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-t2qwp_5a7df215-53c5-4771-95de-9af59255b3de/swift-ring-rebalance/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.095703 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-httpd/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.264058 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-auditor/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.340330 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-reaper/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.376273 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-replicator/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.456466 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-server/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.501861 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-auditor/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.660592 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-updater/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.691819 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-server/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.692709 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-replicator/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.741270 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-auditor/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.891269 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-expirer/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.923594 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-replicator/0.log"
Jan 09 11:48:41 crc kubenswrapper[4727]: I0109 11:48:41.976324 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-server/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.072009 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-updater/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.124671 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/rsync/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.238897 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/swift-recon-cron/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.440787 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5_2d4033a7-e7a4-495b-bbb9-63e8ae1189bc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.541648 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e/tempest-tests-tempest-tests-runner/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.755372 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_65b47f8e-eab5-4015-9926-36dcf8a8a1f0/test-operator-logs-container/0.log"
Jan 09 11:48:42 crc kubenswrapper[4727]: I0109 11:48:42.833189 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m4njz_6811cbf2-94eb-44a0-ae3e-8f0e35163df5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:48:51 crc kubenswrapper[4727]: I0109 11:48:51.015218 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0e6e8606-58f3-4640-939b-afa25ce1ce03/memcached/0.log"
Jan 09 11:48:51 crc kubenswrapper[4727]: I0109 11:48:51.861490 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:48:51 crc kubenswrapper[4727]: E0109 11:48:51.861800 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:49:05 crc kubenswrapper[4727]: I0109 11:49:05.860663 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:49:05 crc kubenswrapper[4727]: E0109 11:49:05.861797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.651910 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-nd7lx_f57a8b19-1f94-4cc4-af28-f7c506f93de5/manager/0.log"
Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.756122 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-l25ck_63639485-2ddb-4983-921a-9de5dda98f0f/manager/0.log"
Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.861341 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-l4fld_e8c91cda-4264-401f-83de-20ddcf5f0d4d/manager/0.log"
Jan 09 11:49:09 crc kubenswrapper[4727]: I0109 11:49:09.957455 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.144626 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.170170 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.185371 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.337447 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.337830 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.413043 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/extract/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.579460 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-s49vr_9891b17e-81f9-4999-b489-db3e162c2a54/manager/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.598428 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-w5c7d_9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6/manager/0.log"
Jan 09 11:49:10 crc kubenswrapper[4727]: I0109 11:49:10.840800 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-nxc7n_51db22df-3d25-4c12-b104-eb3848940958/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.064989 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-g5ckd_e4480343-1920-4926-8668-e47e5bbfb646/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.083906 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-qpmcd_24886819-7c1f-4b1f-880e-4b2102e302c1/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.297792 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-4nzmw_6040cced-684e-4521-9c4e-1debba9d5320/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.311887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-6gtz5_ddfee9e4-1084-4750-ab19-473dde7a2fb6/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.575258 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-4dv6h_e604d4a1-bf95-49df-a854-b15337b7fae7/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.583305 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-q8wx7_848b9588-10d2-4bd4-bcc0-cccd55334c85/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.780936 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-pnk72_fab7e320-c116-4603-9aac-2e310be1b209/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.800734 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-69kx5_9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb/manager/0.log"
Jan 09 11:49:11 crc kubenswrapper[4727]: I0109 11:49:11.954429 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh_3550e1cd-642e-481c-b98f-b6d3770f51ca/manager/0.log"
Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.355000 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cj5kr_26bfbd30-40a2-466a-862d-6cdf25911f85/registry-server/0.log"
Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.556823 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-75c59d454f-d829c_f749f148-ae4b-475b-90d9-1028d134d57c/operator/0.log"
Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.709040 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-gkkm4_558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6/manager/0.log"
Jan 09 11:49:12 crc kubenswrapper[4727]: I0109 11:49:12.856187 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-cc8k9_15c1d49b-c086-4c30-9a99-e0fb597dd76f/manager/0.log"
Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.174156 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2m6mz_ee5399a2-4352-4013-9c26-a40e4bc815e3/operator/0.log"
Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.265871 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-vgcgj_ba0be6cc-1e31-4421-aa33-1e2514069376/manager/0.log"
Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.569598 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-m8s9d_e3f94965-fce3-4e35-9f97-5047e05dd50a/manager/0.log"
Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.574724 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-x4r9z_c371fa9c-dd02-4673-99aa-4ec8fa8d9e07/manager/0.log"
Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.710286 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7db9fd4464-5h9ft_6a33b307-e521-43c4-8e35-3e9d7d553716/manager/0.log"
Jan 09 11:49:13 crc kubenswrapper[4727]: I0109 11:49:13.811406 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-jvkn5_9300f2a9-97a8-4868-9485-8dd5d51df39e/manager/0.log"
Jan 09 11:49:18 crc kubenswrapper[4727]: I0109 11:49:18.860470 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:49:18 crc kubenswrapper[4727]: E0109 11:49:18.861631 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.746171 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w6pvx_879d1222-addb-406a-b8fd-3ce4068c1d08/control-plane-machine-set-operator/0.log"
Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.836086 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/machine-api-operator/0.log"
Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.861256 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:49:33 crc kubenswrapper[4727]: E0109 11:49:33.861664 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:49:33 crc kubenswrapper[4727]: I0109 11:49:33.887415 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/kube-rbac-proxy/0.log"
Jan 09 11:49:40 crc kubenswrapper[4727]: I0109 11:49:40.701447 4727 scope.go:117] "RemoveContainer" containerID="a8b9b837f3d64cab9ad49691366d5443456d32949ff182ebe10f074f06271689"
Jan 09 11:49:40 crc kubenswrapper[4727]: I0109 11:49:40.727197 4727 scope.go:117] "RemoveContainer" containerID="2f3a8912f452e870ff284e85507aa7e2cb5e67dc97fa6f73f6097f0b62c7f0d4"
Jan 09 11:49:40 crc kubenswrapper[4727]: I0109 11:49:40.780888 4727 scope.go:117] "RemoveContainer" containerID="057674623f5b7168f918bfb80a474162495f7bf1f3362667d12edc503c8bd12b"
Jan 09 11:49:45 crc kubenswrapper[4727]: I0109 11:49:45.860312 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:49:45 crc kubenswrapper[4727]: E0109 11:49:45.861158 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:49:46 crc kubenswrapper[4727]: I0109 11:49:46.737152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2qqks_2715d39f-d488-448b-b6f2-ff592dea195a/cert-manager-controller/0.log"
Jan 09 11:49:46 crc kubenswrapper[4727]: I0109 11:49:46.890955 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cbsgr_3a45eda8-4151-4b6c-b0f2-ab6416dc34e9/cert-manager-cainjector/0.log"
Jan 09 11:49:46 crc kubenswrapper[4727]: I0109 11:49:46.981681 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qlfjg_5cee0bf6-27dd-4944-bbef-574afbae1542/cert-manager-webhook/0.log"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.854248 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"]
Jan 09 11:49:51 crc kubenswrapper[4727]: E0109 11:49:51.869368 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981355da-ce46-4790-9eea-9af34f7cc603" containerName="container-00"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.869444 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="981355da-ce46-4790-9eea-9af34f7cc603" containerName="container-00"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.871106 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="981355da-ce46-4790-9eea-9af34f7cc603" containerName="container-00"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.879062 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.894429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"]
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.926923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.927036 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:51 crc kubenswrapper[4727]: I0109 11:49:51.927110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.031382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.031481 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.031677 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.032127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.032220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.055273 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"redhat-operators-tqmcx\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") " pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.211716 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:49:52 crc kubenswrapper[4727]: I0109 11:49:52.734674 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"]
Jan 09 11:49:53 crc kubenswrapper[4727]: I0109 11:49:53.753815 4727 generic.go:334] "Generic (PLEG): container finished" podID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1" exitCode=0
Jan 09 11:49:53 crc kubenswrapper[4727]: I0109 11:49:53.753933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1"}
Jan 09 11:49:53 crc kubenswrapper[4727]: I0109 11:49:53.754372 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerStarted","Data":"43fa514c238a858568bd3ffb421a7fbd82c4411a3475d73032ca455b3c2d1d6c"}
Jan 09 11:49:55 crc kubenswrapper[4727]: I0109 11:49:55.787705 4727 generic.go:334] "Generic (PLEG): container finished" podID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23" exitCode=0
Jan 09 11:49:55 crc kubenswrapper[4727]: I0109 11:49:55.787841 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"}
Jan 09 11:49:57 crc kubenswrapper[4727]: I0109 11:49:57.814686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerStarted","Data":"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"}
Jan 09 11:49:57 crc kubenswrapper[4727]: I0109 11:49:57.843695 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tqmcx" podStartSLOduration=4.331636172 podStartE2EDuration="6.843671684s" podCreationTimestamp="2026-01-09 11:49:51 +0000 UTC" firstStartedPulling="2026-01-09 11:49:53.755961887 +0000 UTC m=+3839.205866668" lastFinishedPulling="2026-01-09 11:49:56.267997399 +0000 UTC m=+3841.717902180" observedRunningTime="2026-01-09 11:49:57.836590341 +0000 UTC m=+3843.286495132" watchObservedRunningTime="2026-01-09 11:49:57.843671684 +0000 UTC m=+3843.293576465"
Jan 09 11:50:00 crc kubenswrapper[4727]: I0109 11:50:00.860357 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:50:00 crc kubenswrapper[4727]: E0109 11:50:00.861148 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.202850 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-6dwzn_9721a7da-2c8a-4a0d-ac56-8b4b11c028cd/nmstate-console-plugin/0.log"
Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.541111 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4757d_673fefde-8c1b-46fe-a88a-00b3fa962a3e/nmstate-handler/0.log"
Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.588935 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/kube-rbac-proxy/0.log"
Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.634452 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/nmstate-metrics/0.log"
Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.822623 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p86wv_b4c7550e-1eaa-4e85-b44d-c752f6e37955/nmstate-operator/0.log"
Jan 09 11:50:01 crc kubenswrapper[4727]: I0109 11:50:01.956027 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-5lc88_7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac/nmstate-webhook/0.log"
Jan 09 11:50:02 crc kubenswrapper[4727]: I0109 11:50:02.213423 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:50:02 crc kubenswrapper[4727]: I0109 11:50:02.213995 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:50:03 crc kubenswrapper[4727]: I0109 11:50:03.265063 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqmcx" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" probeResult="failure" output=<
Jan 09 11:50:03 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Jan 09 11:50:03 crc kubenswrapper[4727]: >
Jan 09 11:50:12 crc kubenswrapper[4727]: I0109 11:50:12.290919 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:50:12 crc kubenswrapper[4727]: I0109 11:50:12.397452 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:50:12 crc kubenswrapper[4727]: I0109 11:50:12.636648 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"]
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.008785 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tqmcx" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" containerID="cri-o://8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" gracePeriod=2
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.530870 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.590602 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") pod \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") "
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.590776 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") pod \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") "
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.591953 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities" (OuterVolumeSpecName: "utilities") pod "4dcc81d9-6a03-4adf-a867-77fbb1589a0e" (UID: "4dcc81d9-6a03-4adf-a867-77fbb1589a0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.592095 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") pod \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\" (UID: \"4dcc81d9-6a03-4adf-a867-77fbb1589a0e\") "
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.594315 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.600837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f" (OuterVolumeSpecName: "kube-api-access-jvk9f") pod "4dcc81d9-6a03-4adf-a867-77fbb1589a0e" (UID: "4dcc81d9-6a03-4adf-a867-77fbb1589a0e"). InnerVolumeSpecName "kube-api-access-jvk9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.696501 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvk9f\" (UniqueName: \"kubernetes.io/projected/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-kube-api-access-jvk9f\") on node \"crc\" DevicePath \"\""
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.711357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dcc81d9-6a03-4adf-a867-77fbb1589a0e" (UID: "4dcc81d9-6a03-4adf-a867-77fbb1589a0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.797829 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc81d9-6a03-4adf-a867-77fbb1589a0e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 11:50:14 crc kubenswrapper[4727]: I0109 11:50:14.867610 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496"
Jan 09 11:50:14 crc kubenswrapper[4727]: E0109 11:50:14.867959 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022720 4727 generic.go:334] "Generic (PLEG): container finished" podID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15" exitCode=0
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"}
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqmcx" event={"ID":"4dcc81d9-6a03-4adf-a867-77fbb1589a0e","Type":"ContainerDied","Data":"43fa514c238a858568bd3ffb421a7fbd82c4411a3475d73032ca455b3c2d1d6c"}
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022856 4727 scope.go:117] "RemoveContainer" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.022863 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqmcx"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.046640 4727 scope.go:117] "RemoveContainer" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.059036 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"]
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.070768 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tqmcx"]
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.081196 4727 scope.go:117] "RemoveContainer" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.123011 4727 scope.go:117] "RemoveContainer" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"
Jan 09 11:50:15 crc kubenswrapper[4727]: E0109 11:50:15.123854 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15\": container with ID starting with 8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15 not found: ID does not exist" containerID="8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.123899 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15"} err="failed to get container status \"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15\": rpc error: code = NotFound desc = could not find container \"8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15\": container with ID starting with 8d33ea0dde8db4989faf22726bb0ff9919246fd10431756c5e18629f62bfcc15 not found: ID does not exist"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.123931 4727 scope.go:117] "RemoveContainer" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"
Jan 09 11:50:15 crc kubenswrapper[4727]: E0109 11:50:15.124450 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23\": container with ID starting with 525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23 not found: ID does not exist" containerID="525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.124558 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23"} err="failed to get container status \"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23\": rpc error: code = NotFound desc = could not find container \"525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23\": container with ID starting with 525c77a3bd6379e6f8cf2fff080d3de48fd3f59b3761b40ba69e74095a3eef23 not found: ID does not exist"
Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.124639 4727 scope.go:117] "RemoveContainer" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1"
Jan 09 11:50:15 crc kubenswrapper[4727]: E0109 11:50:15.124986 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1\": container with ID starting with 1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1 not found: ID
does not exist" containerID="1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1" Jan 09 11:50:15 crc kubenswrapper[4727]: I0109 11:50:15.125019 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1"} err="failed to get container status \"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1\": rpc error: code = NotFound desc = could not find container \"1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1\": container with ID starting with 1989a6a65f3dd6fad8715ab024f71ce5f022df8cc046e5f4901b2f2e5e16c6c1 not found: ID does not exist" Jan 09 11:50:16 crc kubenswrapper[4727]: I0109 11:50:16.874065 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" path="/var/lib/kubelet/pods/4dcc81d9-6a03-4adf-a867-77fbb1589a0e/volumes" Jan 09 11:50:27 crc kubenswrapper[4727]: I0109 11:50:27.861069 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:27 crc kubenswrapper[4727]: E0109 11:50:27.862234 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.386447 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/kube-rbac-proxy/0.log" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.588634 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/controller/0.log" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.679982 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-6msbv_ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee/frr-k8s-webhook-server/0.log" Jan 09 11:50:31 crc kubenswrapper[4727]: I0109 11:50:31.824967 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.011489 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.021235 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.061051 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.082895 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.272634 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.312363 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.316738 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.351369 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.543417 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.550483 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.563643 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/controller/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.581570 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.783078 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr-metrics/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.819155 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy/0.log" Jan 09 11:50:32 crc kubenswrapper[4727]: I0109 11:50:32.874347 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy-frr/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.036234 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/reloader/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.109193 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fc8994bc9-qg228_d7eb33c1-26fc-47be-8c5b-f235afa77ea8/manager/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.396597 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c5db45976-lnrnz_d3f738e6-a0bc-42cd-b4d8-71940837e09f/webhook-server/0.log" Jan 09 11:50:33 crc kubenswrapper[4727]: I0109 11:50:33.590695 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/kube-rbac-proxy/0.log" Jan 09 11:50:34 crc kubenswrapper[4727]: I0109 11:50:34.226618 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/speaker/0.log" Jan 09 11:50:34 crc kubenswrapper[4727]: I0109 11:50:34.705255 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr/0.log" Jan 09 11:50:39 crc kubenswrapper[4727]: I0109 11:50:39.861085 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:39 crc kubenswrapper[4727]: E0109 11:50:39.862303 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.048053 4727 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.436912 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.437045 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.457785 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.645926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.685192 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.752871 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/extract/0.log" Jan 09 11:50:48 crc kubenswrapper[4727]: I0109 11:50:48.877745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" 
Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.092211 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.107431 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.170023 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.410220 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.467436 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/extract/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.727048 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log" Jan 09 11:50:49 crc kubenswrapper[4727]: I0109 11:50:49.870692 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.113020 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.118646 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.161438 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.303070 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.383103 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.556878 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.749948 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-962zg_ef9e8739-e51d-4fa8-9970-ce63af133d20/registry-server/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.858771 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.924300 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 11:50:50 crc kubenswrapper[4727]: I0109 11:50:50.939962 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.132872 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.140373 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.369898 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-55prz_82b1f92b-6077-4b4c-876a-3d732a78b2cc/marketplace-operator/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.504318 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.678900 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/registry-server/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.803572 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.810651 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 11:50:51 crc kubenswrapper[4727]: I0109 11:50:51.829379 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.027443 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.054502 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.242564 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/registry-server/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.257396 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.486089 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.492223 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.493488 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" 
Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.687434 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log" Jan 09 11:50:52 crc kubenswrapper[4727]: I0109 11:50:52.713547 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log" Jan 09 11:50:53 crc kubenswrapper[4727]: I0109 11:50:53.218816 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/registry-server/0.log" Jan 09 11:50:53 crc kubenswrapper[4727]: I0109 11:50:53.860075 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:50:53 crc kubenswrapper[4727]: E0109 11:50:53.860519 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:06 crc kubenswrapper[4727]: I0109 11:51:06.861005 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:06 crc kubenswrapper[4727]: E0109 11:51:06.862565 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:19 crc kubenswrapper[4727]: I0109 11:51:19.861200 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:19 crc kubenswrapper[4727]: E0109 11:51:19.862293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.678420 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.679731 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-utilities" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679746 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-utilities" Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.679755 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-content" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679763 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="extract-content" Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.679776 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679781 4727 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.679982 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcc81d9-6a03-4adf-a867-77fbb1589a0e" containerName="registry-server" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.681362 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.700413 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.844523 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.844999 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.845041 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.860832 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:33 crc kubenswrapper[4727]: E0109 11:51:33.861118 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947116 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947783 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.947842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:33 crc kubenswrapper[4727]: I0109 11:51:33.970809 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"redhat-marketplace-rgzzb\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:34 crc kubenswrapper[4727]: I0109 11:51:34.008373 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:34 crc kubenswrapper[4727]: I0109 11:51:34.559048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:34 crc kubenswrapper[4727]: I0109 11:51:34.872568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerStarted","Data":"a01774070452956398e76f97df5be70efeff207ae0f8826425953d52ef7f9fb0"} Jan 09 11:51:35 crc kubenswrapper[4727]: E0109 11:51:35.134593 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2cd93f_bd92_49a9_9845_209da98d1ef1.slice/crio-cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95.scope\": RecentStats: unable to find data in memory cache]" Jan 09 11:51:35 crc kubenswrapper[4727]: I0109 11:51:35.885122 4727 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerID="cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95" exitCode=0 Jan 09 11:51:35 crc kubenswrapper[4727]: I0109 11:51:35.885247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95"} Jan 09 11:51:35 crc kubenswrapper[4727]: I0109 11:51:35.887967 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 11:51:38 crc kubenswrapper[4727]: I0109 11:51:38.918703 4727 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerID="a48ad10f14588525be52ba99a6f228bbf392891b7101a1611d23da068cec5c09" exitCode=0 Jan 09 11:51:38 
crc kubenswrapper[4727]: I0109 11:51:38.919295 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"a48ad10f14588525be52ba99a6f228bbf392891b7101a1611d23da068cec5c09"} Jan 09 11:51:40 crc kubenswrapper[4727]: I0109 11:51:40.957652 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerStarted","Data":"7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028"} Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.009522 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.011632 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.069025 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:44 crc kubenswrapper[4727]: I0109 11:51:44.093839 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgzzb" podStartSLOduration=7.352611904 podStartE2EDuration="11.093815713s" podCreationTimestamp="2026-01-09 11:51:33 +0000 UTC" firstStartedPulling="2026-01-09 11:51:35.887771518 +0000 UTC m=+3941.337676299" lastFinishedPulling="2026-01-09 11:51:39.628975327 +0000 UTC m=+3945.078880108" observedRunningTime="2026-01-09 11:51:40.986637176 +0000 UTC m=+3946.436541967" watchObservedRunningTime="2026-01-09 11:51:44.093815713 +0000 UTC m=+3949.543720494" Jan 09 11:51:48 crc kubenswrapper[4727]: I0109 11:51:48.865805 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:51:48 crc kubenswrapper[4727]: E0109 11:51:48.866930 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:51:54 crc kubenswrapper[4727]: I0109 11:51:54.070810 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:54 crc kubenswrapper[4727]: I0109 11:51:54.131667 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:54 crc kubenswrapper[4727]: I0109 11:51:54.132574 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgzzb" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" containerID="cri-o://7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028" gracePeriod=2 Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.098846 4727 generic.go:334] "Generic (PLEG): container finished" podID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerID="7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028" exitCode=0 Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.099353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028"} Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.291295 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.452336 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") pod \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.452466 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") pod \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.452614 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") pod \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\" (UID: \"cb2cd93f-bd92-49a9-9845-209da98d1ef1\") " Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.453340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities" (OuterVolumeSpecName: "utilities") pod "cb2cd93f-bd92-49a9-9845-209da98d1ef1" (UID: "cb2cd93f-bd92-49a9-9845-209da98d1ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.460773 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g" (OuterVolumeSpecName: "kube-api-access-wwn9g") pod "cb2cd93f-bd92-49a9-9845-209da98d1ef1" (UID: "cb2cd93f-bd92-49a9-9845-209da98d1ef1"). InnerVolumeSpecName "kube-api-access-wwn9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.504779 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb2cd93f-bd92-49a9-9845-209da98d1ef1" (UID: "cb2cd93f-bd92-49a9-9845-209da98d1ef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.560380 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.560455 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwn9g\" (UniqueName: \"kubernetes.io/projected/cb2cd93f-bd92-49a9-9845-209da98d1ef1-kube-api-access-wwn9g\") on node \"crc\" DevicePath \"\"" Jan 09 11:51:55 crc kubenswrapper[4727]: I0109 11:51:55.560475 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2cd93f-bd92-49a9-9845-209da98d1ef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.112392 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzzb" event={"ID":"cb2cd93f-bd92-49a9-9845-209da98d1ef1","Type":"ContainerDied","Data":"a01774070452956398e76f97df5be70efeff207ae0f8826425953d52ef7f9fb0"} Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.112574 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzzb" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.112939 4727 scope.go:117] "RemoveContainer" containerID="7e31dca1794da8f61d4c137169973ade16676e74b1e7cf5656518b5cf85b7028" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.149907 4727 scope.go:117] "RemoveContainer" containerID="a48ad10f14588525be52ba99a6f228bbf392891b7101a1611d23da068cec5c09" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.155682 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.169842 4727 scope.go:117] "RemoveContainer" containerID="cac191dfe9c8f5a5b8f32b55189cc4ddc6a42f7624d9b2e5ccdcc57770a25a95" Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.171258 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzzb"] Jan 09 11:51:56 crc kubenswrapper[4727]: I0109 11:51:56.873673 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" path="/var/lib/kubelet/pods/cb2cd93f-bd92-49a9-9845-209da98d1ef1/volumes" Jan 09 11:52:01 crc kubenswrapper[4727]: I0109 11:52:01.860957 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:52:01 crc kubenswrapper[4727]: E0109 11:52:01.861607 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:52:16 crc kubenswrapper[4727]: I0109 11:52:16.860843 4727 scope.go:117] "RemoveContainer" 
containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:52:17 crc kubenswrapper[4727]: I0109 11:52:17.326097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f"} Jan 09 11:53:02 crc kubenswrapper[4727]: I0109 11:53:02.785299 4727 generic.go:334] "Generic (PLEG): container finished" podID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" exitCode=0 Jan 09 11:53:02 crc kubenswrapper[4727]: I0109 11:53:02.785429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwztv/must-gather-hnbtv" event={"ID":"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5","Type":"ContainerDied","Data":"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70"} Jan 09 11:53:02 crc kubenswrapper[4727]: I0109 11:53:02.786921 4727 scope.go:117] "RemoveContainer" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:03 crc kubenswrapper[4727]: I0109 11:53:03.462395 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwztv_must-gather-hnbtv_b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/gather/0.log" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.219748 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.222947 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dwztv/must-gather-hnbtv" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" containerID="cri-o://c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" gracePeriod=2 Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.227575 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwztv/must-gather-hnbtv"] Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.654267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwztv_must-gather-hnbtv_b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/copy/0.log" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.655282 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.782782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") pod \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.782908 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") pod \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\" (UID: \"b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5\") " Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.790027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq" (OuterVolumeSpecName: "kube-api-access-xh4rq") pod "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" (UID: "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5"). InnerVolumeSpecName "kube-api-access-xh4rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.885652 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh4rq\" (UniqueName: \"kubernetes.io/projected/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-kube-api-access-xh4rq\") on node \"crc\" DevicePath \"\"" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.901788 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwztv_must-gather-hnbtv_b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/copy/0.log" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.902893 4727 generic.go:334] "Generic (PLEG): container finished" podID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" exitCode=143 Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.902989 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwztv/must-gather-hnbtv" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.903040 4727 scope.go:117] "RemoveContainer" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.928523 4727 scope.go:117] "RemoveContainer" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.941901 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" (UID: "b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:53:11 crc kubenswrapper[4727]: I0109 11:53:11.988265 4727 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.003330 4727 scope.go:117] "RemoveContainer" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" Jan 09 11:53:12 crc kubenswrapper[4727]: E0109 11:53:12.004378 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676\": container with ID starting with c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676 not found: ID does not exist" containerID="c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.004442 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676"} err="failed to get container status \"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676\": rpc error: code = NotFound desc = could not find container \"c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676\": container with ID starting with c26e9522b226bb7a086c9a05aa2142d6ab0604d73e097f7d768be920cee6a676 not found: ID does not exist" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.004518 4727 scope.go:117] "RemoveContainer" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:12 crc kubenswrapper[4727]: E0109 11:53:12.004850 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70\": container with ID starting with 77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70 not found: ID does not exist" containerID="77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.004885 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70"} err="failed to get container status \"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70\": rpc error: code = NotFound desc = could not find container \"77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70\": container with ID starting with 77b41845902ff38c49a79b5a56ae6527f0fbc0302442c201d15a224df602dc70 not found: ID does not exist" Jan 09 11:53:12 crc kubenswrapper[4727]: I0109 11:53:12.878531 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" path="/var/lib/kubelet/pods/b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5/volumes" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.326682 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="gather" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328337 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="gather" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328366 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328375 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328395 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-content" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328402 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-content" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328426 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-utilities" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328434 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="extract-utilities" Jan 09 11:53:54 crc kubenswrapper[4727]: E0109 11:53:54.328454 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328460 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328708 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2cd93f-bd92-49a9-9845-209da98d1ef1" containerName="registry-server" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328733 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="copy" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.328746 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fbdf6c-2a38-4f07-9330-2ff6601a9eb5" containerName="gather" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.330599 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.343228 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.520741 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-catalog-content\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.520951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-utilities\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.521025 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6975\" (UniqueName: \"kubernetes.io/projected/26aacbc8-deff-4e22-931d-552244f5bfcc-kube-api-access-t6975\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.623708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-utilities\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.623854 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t6975\" (UniqueName: \"kubernetes.io/projected/26aacbc8-deff-4e22-931d-552244f5bfcc-kube-api-access-t6975\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.624317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-utilities\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.624354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-catalog-content\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.624773 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aacbc8-deff-4e22-931d-552244f5bfcc-catalog-content\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.656398 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6975\" (UniqueName: \"kubernetes.io/projected/26aacbc8-deff-4e22-931d-552244f5bfcc-kube-api-access-t6975\") pod \"certified-operators-4tm96\" (UID: \"26aacbc8-deff-4e22-931d-552244f5bfcc\") " pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:54 crc kubenswrapper[4727]: I0109 11:53:54.662924 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:53:55 crc kubenswrapper[4727]: I0109 11:53:55.028053 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:53:55 crc kubenswrapper[4727]: I0109 11:53:55.120076 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerStarted","Data":"0e5c2069e5a99786d3be5d2e49a3f70ad6be1c1f764fdaa1dbe03f74a36d829b"} Jan 09 11:53:56 crc kubenswrapper[4727]: I0109 11:53:56.133332 4727 generic.go:334] "Generic (PLEG): container finished" podID="26aacbc8-deff-4e22-931d-552244f5bfcc" containerID="2ffcaf6d5f244e62ba5b5943d33b9a20c1499d2655517d727b7bbc96d3ee9107" exitCode=0 Jan 09 11:53:56 crc kubenswrapper[4727]: I0109 11:53:56.133454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerDied","Data":"2ffcaf6d5f244e62ba5b5943d33b9a20c1499d2655517d727b7bbc96d3ee9107"} Jan 09 11:54:00 crc kubenswrapper[4727]: I0109 11:54:00.181022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerStarted","Data":"193e8eec92c5573dff17d7e7f9cab49b98d7f6e8564be54a99606cfbf0975025"} Jan 09 11:54:01 crc kubenswrapper[4727]: I0109 11:54:01.194152 4727 generic.go:334] "Generic (PLEG): container finished" podID="26aacbc8-deff-4e22-931d-552244f5bfcc" containerID="193e8eec92c5573dff17d7e7f9cab49b98d7f6e8564be54a99606cfbf0975025" exitCode=0 Jan 09 11:54:01 crc kubenswrapper[4727]: I0109 11:54:01.194215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" 
event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerDied","Data":"193e8eec92c5573dff17d7e7f9cab49b98d7f6e8564be54a99606cfbf0975025"} Jan 09 11:54:02 crc kubenswrapper[4727]: I0109 11:54:02.210159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tm96" event={"ID":"26aacbc8-deff-4e22-931d-552244f5bfcc","Type":"ContainerStarted","Data":"9fdaf8f629f89ef6f8d288061c3b747b90fb871fa6476d3728fa5c8f90a3f81a"} Jan 09 11:54:02 crc kubenswrapper[4727]: I0109 11:54:02.254709 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tm96" podStartSLOduration=2.697009149 podStartE2EDuration="8.254656876s" podCreationTimestamp="2026-01-09 11:53:54 +0000 UTC" firstStartedPulling="2026-01-09 11:53:56.136345532 +0000 UTC m=+4081.586250313" lastFinishedPulling="2026-01-09 11:54:01.693993259 +0000 UTC m=+4087.143898040" observedRunningTime="2026-01-09 11:54:02.243486883 +0000 UTC m=+4087.693391754" watchObservedRunningTime="2026-01-09 11:54:02.254656876 +0000 UTC m=+4087.704561667" Jan 09 11:54:04 crc kubenswrapper[4727]: I0109 11:54:04.663275 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:04 crc kubenswrapper[4727]: I0109 11:54:04.663829 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:04 crc kubenswrapper[4727]: I0109 11:54:04.880001 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.709656 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tm96" Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.784124 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-4tm96"] Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.841212 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 11:54:14 crc kubenswrapper[4727]: I0109 11:54:14.841555 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-962zg" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" containerID="cri-o://33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd" gracePeriod=2 Jan 09 11:54:15 crc kubenswrapper[4727]: I0109 11:54:15.343461 4727 generic.go:334] "Generic (PLEG): container finished" podID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerID="33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd" exitCode=0 Jan 09 11:54:15 crc kubenswrapper[4727]: I0109 11:54:15.343540 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd"} Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.216689 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.330981 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") pod \"ef9e8739-e51d-4fa8-9970-ce63af133d20\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.331277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") pod \"ef9e8739-e51d-4fa8-9970-ce63af133d20\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.331392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") pod \"ef9e8739-e51d-4fa8-9970-ce63af133d20\" (UID: \"ef9e8739-e51d-4fa8-9970-ce63af133d20\") " Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.332954 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities" (OuterVolumeSpecName: "utilities") pod "ef9e8739-e51d-4fa8-9970-ce63af133d20" (UID: "ef9e8739-e51d-4fa8-9970-ce63af133d20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.342643 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v" (OuterVolumeSpecName: "kube-api-access-tdx5v") pod "ef9e8739-e51d-4fa8-9970-ce63af133d20" (UID: "ef9e8739-e51d-4fa8-9970-ce63af133d20"). InnerVolumeSpecName "kube-api-access-tdx5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.359672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-962zg" event={"ID":"ef9e8739-e51d-4fa8-9970-ce63af133d20","Type":"ContainerDied","Data":"930189ee498333983e08c7ab2e58382299db3fb83cb58d6430015969c8cef074"} Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.359738 4727 scope.go:117] "RemoveContainer" containerID="33fa28277d30a2f03080a57426877e49f61fa878bdb9d5d398092afaef585fdd" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.359983 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-962zg" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.407413 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef9e8739-e51d-4fa8-9970-ce63af133d20" (UID: "ef9e8739-e51d-4fa8-9970-ce63af133d20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.425485 4727 scope.go:117] "RemoveContainer" containerID="5b01b39fbd490da0f09809ecc3d21cd8257e6278377041de1543e2204dfa1946" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.433959 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdx5v\" (UniqueName: \"kubernetes.io/projected/ef9e8739-e51d-4fa8-9970-ce63af133d20-kube-api-access-tdx5v\") on node \"crc\" DevicePath \"\"" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.433998 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.434016 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef9e8739-e51d-4fa8-9970-ce63af133d20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.455838 4727 scope.go:117] "RemoveContainer" containerID="bf159a57ad831d29f382ffa97b36634879c00d9cea9b38064632f3c6da0f08f3" Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.697215 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.705313 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-962zg"] Jan 09 11:54:16 crc kubenswrapper[4727]: I0109 11:54:16.873597 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" path="/var/lib/kubelet/pods/ef9e8739-e51d-4fa8-9970-ce63af133d20/volumes" Jan 09 11:54:39 crc kubenswrapper[4727]: I0109 11:54:39.405857 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:54:39 crc kubenswrapper[4727]: I0109 11:54:39.406776 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:55:09 crc kubenswrapper[4727]: I0109 11:55:09.404688 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:55:09 crc kubenswrapper[4727]: I0109 11:55:09.405580 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.405046 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.405812 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.405874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.406868 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:55:39 crc kubenswrapper[4727]: I0109 11:55:39.406927 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f" gracePeriod=600 Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.253658 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f" exitCode=0 Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.254689 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f"} Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.254739 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"} Jan 09 11:55:40 crc kubenswrapper[4727]: I0109 11:55:40.254757 4727 scope.go:117] "RemoveContainer" containerID="760ec92d96e220c20812741cd34db3eaa70178e7e609e7ec5a0c098f73f35496" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.453524 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 11:56:28 crc kubenswrapper[4727]: E0109 11:56:28.455018 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-utilities" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455037 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-utilities" Jan 09 11:56:28 crc kubenswrapper[4727]: E0109 11:56:28.455080 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-content" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455089 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="extract-content" Jan 09 11:56:28 crc kubenswrapper[4727]: E0109 11:56:28.455104 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455113 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.455345 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9e8739-e51d-4fa8-9970-ce63af133d20" containerName="registry-server" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.459653 4727 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.462851 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z2dx8"/"default-dockercfg-sx8dq" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.463269 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2dx8"/"openshift-service-ca.crt" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.468304 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2dx8"/"kube-root-ca.crt" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.480584 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.511920 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.512293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.614904 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " 
pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.615041 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:28 crc kubenswrapper[4727]: I0109 11:56:28.615693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.038272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"must-gather-pnnsk\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") " pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.086584 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.536913 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"] Jan 09 11:56:29 crc kubenswrapper[4727]: W0109 11:56:29.542311 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6406f2a3_a4e6_4379_a2a6_adcc1eb952fa.slice/crio-a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1 WatchSource:0}: Error finding container a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1: Status 404 returned error can't find the container with id a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1 Jan 09 11:56:29 crc kubenswrapper[4727]: I0109 11:56:29.751055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerStarted","Data":"a598d5ec570e0909bb130347aa3b177731fc41b064d5fc6f499efedc3e4093f1"} Jan 09 11:56:30 crc kubenswrapper[4727]: I0109 11:56:30.763955 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerStarted","Data":"ba3fec2faa6d34d88b2c0ab138a91ee7a89e044844462fc1ed9ddd8ff5e29edf"} Jan 09 11:56:30 crc kubenswrapper[4727]: I0109 11:56:30.764469 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerStarted","Data":"97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4"} Jan 09 11:56:30 crc kubenswrapper[4727]: I0109 11:56:30.794406 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" podStartSLOduration=2.794375793 
podStartE2EDuration="2.794375793s" podCreationTimestamp="2026-01-09 11:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:56:30.787866977 +0000 UTC m=+4236.237771768" watchObservedRunningTime="2026-01-09 11:56:30.794375793 +0000 UTC m=+4236.244280574" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.643110 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-mgdwz"] Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.645693 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.822267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.822809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.924614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.924697 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.924814 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.953473 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"crc-debug-mgdwz\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:33 crc kubenswrapper[4727]: I0109 11:56:33.981170 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:56:34 crc kubenswrapper[4727]: I0109 11:56:34.804400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" event={"ID":"22abbe2c-763b-4058-8efb-ad09eb687bc9","Type":"ContainerStarted","Data":"8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431"} Jan 09 11:56:34 crc kubenswrapper[4727]: I0109 11:56:34.805407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" event={"ID":"22abbe2c-763b-4058-8efb-ad09eb687bc9","Type":"ContainerStarted","Data":"017e73f51997a03f76ba5c753ba49e6ae59e3b16cc9dee47e3993b90f6c775c3"} Jan 09 11:56:34 crc kubenswrapper[4727]: I0109 11:56:34.824103 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" podStartSLOduration=1.8240797180000001 podStartE2EDuration="1.824079718s" podCreationTimestamp="2026-01-09 11:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 11:56:34.820774199 +0000 UTC m=+4240.270678990" watchObservedRunningTime="2026-01-09 11:56:34.824079718 +0000 UTC m=+4240.273984499" Jan 09 11:57:14 crc kubenswrapper[4727]: I0109 11:57:14.216990 4727 generic.go:334] "Generic (PLEG): container finished" podID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerID="8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431" exitCode=0 Jan 09 11:57:14 crc kubenswrapper[4727]: I0109 11:57:14.217064 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" event={"ID":"22abbe2c-763b-4058-8efb-ad09eb687bc9","Type":"ContainerDied","Data":"8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431"} Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.361226 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.400457 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-mgdwz"] Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.414587 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-mgdwz"] Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.476259 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") pod \"22abbe2c-763b-4058-8efb-ad09eb687bc9\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.476353 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") pod \"22abbe2c-763b-4058-8efb-ad09eb687bc9\" (UID: \"22abbe2c-763b-4058-8efb-ad09eb687bc9\") " Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.476486 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host" (OuterVolumeSpecName: "host") pod "22abbe2c-763b-4058-8efb-ad09eb687bc9" (UID: "22abbe2c-763b-4058-8efb-ad09eb687bc9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.477193 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22abbe2c-763b-4058-8efb-ad09eb687bc9-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.843004 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944" (OuterVolumeSpecName: "kube-api-access-vz944") pod "22abbe2c-763b-4058-8efb-ad09eb687bc9" (UID: "22abbe2c-763b-4058-8efb-ad09eb687bc9"). InnerVolumeSpecName "kube-api-access-vz944". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:57:15 crc kubenswrapper[4727]: I0109 11:57:15.887885 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz944\" (UniqueName: \"kubernetes.io/projected/22abbe2c-763b-4058-8efb-ad09eb687bc9-kube-api-access-vz944\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:16 crc kubenswrapper[4727]: I0109 11:57:16.246027 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017e73f51997a03f76ba5c753ba49e6ae59e3b16cc9dee47e3993b90f6c775c3" Jan 09 11:57:16 crc kubenswrapper[4727]: I0109 11:57:16.246078 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-mgdwz" Jan 09 11:57:16 crc kubenswrapper[4727]: I0109 11:57:16.873452 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" path="/var/lib/kubelet/pods/22abbe2c-763b-4058-8efb-ad09eb687bc9/volumes" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.334535 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-9lb5p"] Jan 09 11:57:17 crc kubenswrapper[4727]: E0109 11:57:17.335844 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerName="container-00" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.335868 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerName="container-00" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.336115 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="22abbe2c-763b-4058-8efb-ad09eb687bc9" containerName="container-00" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.337175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.423309 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.423440 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.525761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.525959 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.525966 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc 
kubenswrapper[4727]: I0109 11:57:17.551543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"crc-debug-9lb5p\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:17 crc kubenswrapper[4727]: I0109 11:57:17.660386 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:18 crc kubenswrapper[4727]: I0109 11:57:18.265444 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" event={"ID":"3cdf248f-c28f-4031-92b0-4945708a36d5","Type":"ContainerStarted","Data":"78754fc872d09ed1a4f5c1e91dae695e35b76032e061eccc50bff1fffd35123a"} Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.276108 4727 generic.go:334] "Generic (PLEG): container finished" podID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerID="7bcd781aca45bcf3260e2bd37f7bdcf3d57df1292214141f1aa5d63a4bcad351" exitCode=0 Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.276258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" event={"ID":"3cdf248f-c28f-4031-92b0-4945708a36d5","Type":"ContainerDied","Data":"7bcd781aca45bcf3260e2bd37f7bdcf3d57df1292214141f1aa5d63a4bcad351"} Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.768618 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-9lb5p"] Jan 09 11:57:19 crc kubenswrapper[4727]: I0109 11:57:19.778203 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-9lb5p"] Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.398382 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.495235 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") pod \"3cdf248f-c28f-4031-92b0-4945708a36d5\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.495389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") pod \"3cdf248f-c28f-4031-92b0-4945708a36d5\" (UID: \"3cdf248f-c28f-4031-92b0-4945708a36d5\") " Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.495460 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host" (OuterVolumeSpecName: "host") pod "3cdf248f-c28f-4031-92b0-4945708a36d5" (UID: "3cdf248f-c28f-4031-92b0-4945708a36d5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.496009 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cdf248f-c28f-4031-92b0-4945708a36d5-host\") on node \"crc\" DevicePath \"\"" Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.502831 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2" (OuterVolumeSpecName: "kube-api-access-cwmv2") pod "3cdf248f-c28f-4031-92b0-4945708a36d5" (UID: "3cdf248f-c28f-4031-92b0-4945708a36d5"). InnerVolumeSpecName "kube-api-access-cwmv2". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.597216 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwmv2\" (UniqueName: \"kubernetes.io/projected/3cdf248f-c28f-4031-92b0-4945708a36d5-kube-api-access-cwmv2\") on node \"crc\" DevicePath \"\""
Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.873160 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" path="/var/lib/kubelet/pods/3cdf248f-c28f-4031-92b0-4945708a36d5/volumes"
Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.961256 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-csgdz"]
Jan 09 11:57:20 crc kubenswrapper[4727]: E0109 11:57:20.962048 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerName="container-00"
Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.962080 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerName="container-00"
Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.962272 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cdf248f-c28f-4031-92b0-4945708a36d5" containerName="container-00"
Jan 09 11:57:20 crc kubenswrapper[4727]: I0109 11:57:20.963097 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.004841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.004955 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.107129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.107238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.107392 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.126031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"crc-debug-csgdz\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") " pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.282694 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.295909 4727 scope.go:117] "RemoveContainer" containerID="7bcd781aca45bcf3260e2bd37f7bdcf3d57df1292214141f1aa5d63a4bcad351"
Jan 09 11:57:21 crc kubenswrapper[4727]: I0109 11:57:21.296068 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-9lb5p"
Jan 09 11:57:21 crc kubenswrapper[4727]: W0109 11:57:21.319594 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8aa84c32_e586_44e1_bf65_2eca20015743.slice/crio-dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48 WatchSource:0}: Error finding container dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48: Status 404 returned error can't find the container with id dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48
Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.308892 4727 generic.go:334] "Generic (PLEG): container finished" podID="8aa84c32-e586-44e1-bf65-2eca20015743" containerID="09b9f88278a379f541541f5230c3d1e100c736600a76b575d9fb665faea3eeac" exitCode=0
Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.308992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" event={"ID":"8aa84c32-e586-44e1-bf65-2eca20015743","Type":"ContainerDied","Data":"09b9f88278a379f541541f5230c3d1e100c736600a76b575d9fb665faea3eeac"}
Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.309775 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/crc-debug-csgdz" event={"ID":"8aa84c32-e586-44e1-bf65-2eca20015743","Type":"ContainerStarted","Data":"dc236ad6fe5e61ae2bb9915c2255bd9b5d35e94d01ca0dd295f8cea232b1de48"}
Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.362113 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-csgdz"]
Jan 09 11:57:22 crc kubenswrapper[4727]: I0109 11:57:22.373407 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/crc-debug-csgdz"]
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.421705 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.554754 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") pod \"8aa84c32-e586-44e1-bf65-2eca20015743\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") "
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.555178 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") pod \"8aa84c32-e586-44e1-bf65-2eca20015743\" (UID: \"8aa84c32-e586-44e1-bf65-2eca20015743\") "
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.554903 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host" (OuterVolumeSpecName: "host") pod "8aa84c32-e586-44e1-bf65-2eca20015743" (UID: "8aa84c32-e586-44e1-bf65-2eca20015743"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.556317 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aa84c32-e586-44e1-bf65-2eca20015743-host\") on node \"crc\" DevicePath \"\""
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.564491 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw" (OuterVolumeSpecName: "kube-api-access-btjtw") pod "8aa84c32-e586-44e1-bf65-2eca20015743" (UID: "8aa84c32-e586-44e1-bf65-2eca20015743"). InnerVolumeSpecName "kube-api-access-btjtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 11:57:23 crc kubenswrapper[4727]: I0109 11:57:23.658672 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btjtw\" (UniqueName: \"kubernetes.io/projected/8aa84c32-e586-44e1-bf65-2eca20015743-kube-api-access-btjtw\") on node \"crc\" DevicePath \"\""
Jan 09 11:57:24 crc kubenswrapper[4727]: I0109 11:57:24.330711 4727 scope.go:117] "RemoveContainer" containerID="09b9f88278a379f541541f5230c3d1e100c736600a76b575d9fb665faea3eeac"
Jan 09 11:57:24 crc kubenswrapper[4727]: I0109 11:57:24.331209 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/crc-debug-csgdz"
Jan 09 11:57:24 crc kubenswrapper[4727]: I0109 11:57:24.881250 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" path="/var/lib/kubelet/pods/8aa84c32-e586-44e1-bf65-2eca20015743/volumes"
Jan 09 11:57:39 crc kubenswrapper[4727]: I0109 11:57:39.404789 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:57:39 crc kubenswrapper[4727]: I0109 11:57:39.405902 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.681794 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api/0.log"
Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.885360 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5456d7bfcd-5bs8c_fef4869f-d107-4f5b-a136-166de8ac7a69/barbican-api-log/0.log"
Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.888745 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener/0.log"
Jan 09 11:57:49 crc kubenswrapper[4727]: I0109 11:57:49.922349 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-d89df6ff4-gzcbx_b166264d-8575-47af-88f1-c569c71c84f1/barbican-keystone-listener-log/0.log"
Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.628475 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker/0.log"
Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.642956 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76fd5dd86c-tmlx2_97d7fe9d-0736-42a7-99bc-99f9f8b5f2c8/barbican-worker-log/0.log"
Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.871267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vqfnc_23e25abc-b16a-4273-846e-7fab7ef1a095/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:50 crc kubenswrapper[4727]: I0109 11:57:50.956615 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-central-agent/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.014798 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/ceilometer-notification-agent/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.135558 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/proxy-httpd/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.135844 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bc762f8b-1dba-4c4a-bec8-30c9d5b27c24/sg-core/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.279483 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.470812 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a36e4825-82aa-4263-a757-807b3c43d2fa/cinder-api-log/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.590663 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/cinder-scheduler/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.636250 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e69c5def-7abe-4486-b548-323e0416cc83/probe/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.750891 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-x2djn_f1169cca-13ce-4a18-8901-faa73fc5b913/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:51 crc kubenswrapper[4727]: I0109 11:57:51.920207 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2l88s_fc6114d6-7052-46b3-a8e5-c8b9731cc92c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.238808 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.416594 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/init/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.492926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-j4b5d_95c81071-440f-4823-8240-dfd215cdf314/dnsmasq-dns/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.531495 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jh9dz_79cfc519-9725-4957-b42c-d262651895a3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.719903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-httpd/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.770620 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cc6d55eb-2432-42c5-80c3-ac9e1fb76f6a/glance-log/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.913342 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-httpd/0.log"
Jan 09 11:57:52 crc kubenswrapper[4727]: I0109 11:57:52.948883 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_992ca8ba-ec96-4dc0-9442-464cbdce8afc/glance-log/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.167147 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.329530 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qplw9_a4f9d22c-83b0-4c0c-95e3-a2b2937908db/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.590887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qs4rr_e3f49f82-8192-4a6a-81ff-b6e5f6a3f4ea/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.619926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57c89666d8-8fhd6_89031be7-ef50-45c8-b43f-b34f66012f21/horizon-log/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.812388 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-666857844b-c2hp6_3738e7aa-d182-43a0-962c-b735526851f2/keystone-api/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.865896 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bd1fb5d2-cc3d-43df-9b11-cf4e197bb8b3/kube-state-metrics/0.log"
Jan 09 11:57:53 crc kubenswrapper[4727]: I0109 11:57:53.911748 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zs24v_a56270d2-f80b-4dda-a64c-fe39d4b4a9e5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:54 crc kubenswrapper[4727]: I0109 11:57:54.222063 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-httpd/0.log"
Jan 09 11:57:54 crc kubenswrapper[4727]: I0109 11:57:54.357592 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8db497957-k8d9r_434346b3-08dc-43a6-aed9-3c00672c0c35/neutron-api/0.log"
Jan 09 11:57:54 crc kubenswrapper[4727]: I0109 11:57:54.425353 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5z82_92bbfcf1-befd-42df-a532-97f9a3bd22d0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.121502 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-log/0.log"
Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.169670 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3aab78e7-6f64-4c9e-bb37-f670092f06eb/nova-cell0-conductor-conductor/0.log"
Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.479747 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6a601271-3d79-4446-bc6f-81b4490541f4/nova-cell1-conductor-conductor/0.log"
Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.558746 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7bfcd192-734d-4709-b2c3-9abafc15a30e/nova-api-api/0.log"
Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.627449 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7275705c-d408-4eb4-af28-b9b51403b913/nova-cell1-novncproxy-novncproxy/0.log"
Jan 09 11:57:55 crc kubenswrapper[4727]: I0109 11:57:55.951713 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s9spc_291b6783-3c71-4449-b696-27c7c340c41a/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.104205 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-log/0.log"
Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.582918 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log"
Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.752004 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1203f055-468b-48e1-b859-78a4d11d5034/nova-scheduler-scheduler/0.log"
Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.863733 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/galera/0.log"
Jan 09 11:57:56 crc kubenswrapper[4727]: I0109 11:57:56.885390 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e90a87ab-2df7-4a4a-8854-6daf3322e3d1/mysql-bootstrap/0.log"
Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.153545 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log"
Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.372240 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/mysql-bootstrap/0.log"
Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.398719 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_398bfc2d-be02-491c-af23-69fc4fc24817/galera/0.log"
Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.583854 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_06c8d5e8-c424-4b08-98a2-8e89fa5a27b4/openstackclient/0.log"
Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.705942 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p58fw_ede60be2-7d1e-482a-b994-6c552d322575/openstack-network-exporter/0.log"
Jan 09 11:57:57 crc kubenswrapper[4727]: I0109 11:57:57.930623 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwrp2_d81594ff-04f5-47c2-9620-db583609e9aa/ovn-controller/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.126415 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c6024d35-671e-4814-9c13-de9897a984ee/nova-metadata-metadata/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.138475 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.412917 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovs-vswitchd/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.445785 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server-init/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.449903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wxljq_bdf6d307-98f2-40a7-8b6c-c149789150ef/ovsdb-server/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.685651 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/openstack-network-exporter/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.734144 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rhzcm_5ebde73e-573e-4b52-b779-dd3cd03761e0/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.758864 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5504697e-8969-45f2-92c6-3aba8688de1a/ovn-northd/0.log"
Jan 09 11:57:58 crc kubenswrapper[4727]: I0109 11:57:58.992247 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/openstack-network-exporter/0.log"
Jan 09 11:57:59 crc kubenswrapper[4727]: I0109 11:57:59.074501 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e25e0da-05c1-4d2e-8e27-c795be192a77/ovsdbserver-nb/0.log"
Jan 09 11:57:59 crc kubenswrapper[4727]: I0109 11:57:59.258821 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/openstack-network-exporter/0.log"
Jan 09 11:57:59 crc kubenswrapper[4727]: I0109 11:57:59.339994 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4a92393f-3fc8-4570-9e2f-b3aed9ce9bb8/ovsdbserver-sb/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.017691 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.051536 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-api/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.147985 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85c4f6b76d-7zrx8_f588c09f-34b7-4bf1-89f2-0f967cf6ddd6/placement-log/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.343498 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/rabbitmq/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.348748 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a49793da-9c08-47ea-892e-fe9e5b16d309/setup-container/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.408135 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.745583 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/setup-container/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.788706 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2tlxd_72a53995-d5d0-4795-a1c7-f8a570a0ff6a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:58:00 crc kubenswrapper[4727]: I0109 11:58:00.797929 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcf1c8d7-2c22-41a5-a1fc-64e9c35bacb9/rabbitmq/0.log"
Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.003300 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4zggm_ce764242-0f23-4580-87ee-9f0f2f81fb0e/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.153078 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lwxvv_d9bcc7e6-29a0-4902-a4be-2ea8e0a1f1a1/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.327117 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-27qwg_6f717d58-9e42-4359-89e8-70a60345d546/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.440394 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9n6wb_247ff33e-a764-4e75-9d54-2c45ae8d8ca7/ssh-known-hosts-edpm-deployment/0.log"
Jan 09 11:58:01 crc kubenswrapper[4727]: I0109 11:58:01.793169 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-httpd/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.186016 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67d6487995-f424z_f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb/proxy-server/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.288524 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-t2qwp_5a7df215-53c5-4771-95de-9af59255b3de/swift-ring-rebalance/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.450281 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-auditor/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.557152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-reaper/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.576041 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-replicator/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.616309 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/account-server/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.671414 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-auditor/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.835024 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-server/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.852579 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-updater/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.903857 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-auditor/0.log"
Jan 09 11:58:02 crc kubenswrapper[4727]: I0109 11:58:02.939616 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/container-replicator/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.090074 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-replicator/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.090390 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-expirer/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.229117 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-server/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.397686 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/object-updater/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.475411 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/rsync/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.506132 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b71205e9-ee26-48fb-aeeb-58eaee9ac9cf/swift-recon-cron/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.709952 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j6bs5_2d4033a7-e7a4-495b-bbb9-63e8ae1189bc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.750609 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_52cc8f55-78e0-4bbe-bd10-b7e08fbb2a1e/tempest-tests-tempest-tests-runner/0.log"
Jan 09 11:58:03 crc kubenswrapper[4727]: I0109 11:58:03.946937 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_65b47f8e-eab5-4015-9926-36dcf8a8a1f0/test-operator-logs-container/0.log"
Jan 09 11:58:04 crc kubenswrapper[4727]: I0109 11:58:04.086472 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-m4njz_6811cbf2-94eb-44a0-ae3e-8f0e35163df5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 09 11:58:09 crc kubenswrapper[4727]: I0109 11:58:09.404562 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 11:58:09 crc kubenswrapper[4727]: I0109 11:58:09.405423 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 11:58:13 crc kubenswrapper[4727]: I0109 11:58:13.506968 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0e6e8606-58f3-4640-939b-afa25ce1ce03/memcached/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.309102 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-nd7lx_f57a8b19-1f94-4cc4-af28-f7c506f93de5/manager/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.457903 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-l25ck_63639485-2ddb-4983-921a-9de5dda98f0f/manager/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.572723 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-l4fld_e8c91cda-4264-401f-83de-20ddcf5f0d4d/manager/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.652723 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.828976 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.885259 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log"
Jan 09 11:58:33 crc kubenswrapper[4727]: I0109 11:58:33.900553 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.015242 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/util/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.039886 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/pull/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.070602 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e5bdd901a4d2823b2bc03af02548c50f5d1f97c53d6f6d6477de47e726njksm_7624e855-2440-4a5a-8905-5e4e7c76a36c/extract/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.234819 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-s49vr_9891b17e-81f9-4999-b489-db3e162c2a54/manager/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.321726 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-w5c7d_9e494b5d-8aeb-47ed-b0a6-5e83b7f58bf6/manager/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.467358 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-nxc7n_51db22df-3d25-4c12-b104-eb3848940958/manager/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.673784 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-g5ckd_e4480343-1920-4926-8668-e47e5bbfb646/manager/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.773094 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-qpmcd_24886819-7c1f-4b1f-880e-4b2102e302c1/manager/0.log"
Jan 09 11:58:34 crc kubenswrapper[4727]: I0109 11:58:34.896814 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-4nzmw_6040cced-684e-4521-9c4e-1debba9d5320/manager/0.log"
Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.591470 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-6gtz5_ddfee9e4-1084-4750-ab19-473dde7a2fb6/manager/0.log"
Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.671152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-4dv6h_e604d4a1-bf95-49df-a854-b15337b7fae7/manager/0.log"
Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.881265 4727 log.go:25] "Finished parsing log
file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-q8wx7_848b9588-10d2-4bd4-bcc0-cccd55334c85/manager/0.log" Jan 09 11:58:35 crc kubenswrapper[4727]: I0109 11:58:35.971097 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-69kx5_9625f9ce-45bc-4ac9-ba7a-dbfb4275fecb/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.086113 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-pnk72_fab7e320-c116-4603-9aac-2e310be1b209/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.172365 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7dn9lh_3550e1cd-642e-481c-b98f-b6d3770f51ca/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.564780 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-75c59d454f-d829c_f749f148-ae4b-475b-90d9-1028d134d57c/operator/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.642293 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cj5kr_26bfbd30-40a2-466a-862d-6cdf25911f85/registry-server/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.900910 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-gkkm4_558e9c8f-57c8-4cd6-a8ef-1551c2c56fe6/manager/0.log" Jan 09 11:58:36 crc kubenswrapper[4727]: I0109 11:58:36.988697 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-cc8k9_15c1d49b-c086-4c30-9a99-e0fb597dd76f/manager/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.206219 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2m6mz_ee5399a2-4352-4013-9c26-a40e4bc815e3/operator/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.622240 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7db9fd4464-5h9ft_6a33b307-e521-43c4-8e35-3e9d7d553716/manager/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.808389 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-x4r9z_c371fa9c-dd02-4673-99aa-4ec8fa8d9e07/manager/0.log" Jan 09 11:58:37 crc kubenswrapper[4727]: I0109 11:58:37.849059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-vgcgj_ba0be6cc-1e31-4421-aa33-1e2514069376/manager/0.log" Jan 09 11:58:38 crc kubenswrapper[4727]: I0109 11:58:38.000143 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-m8s9d_e3f94965-fce3-4e35-9f97-5047e05dd50a/manager/0.log" Jan 09 11:58:38 crc kubenswrapper[4727]: I0109 11:58:38.037076 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-jvkn5_9300f2a9-97a8-4868-9485-8dd5d51df39e/manager/0.log" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.405392 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.405936 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.406004 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.406996 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 11:58:39 crc kubenswrapper[4727]: I0109 11:58:39.407070 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" gracePeriod=600 Jan 09 11:58:39 crc kubenswrapper[4727]: E0109 11:58:39.607639 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.167466 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" exitCode=0 Jan 09 
11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.167579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"} Jan 09 11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.168134 4727 scope.go:117] "RemoveContainer" containerID="cb5698ae4a9cec25912d8da8a34ee6fc1be0f8538e1e712bfb12c03e538af39f" Jan 09 11:58:40 crc kubenswrapper[4727]: I0109 11:58:40.169113 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:58:40 crc kubenswrapper[4727]: E0109 11:58:40.169419 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:58:51 crc kubenswrapper[4727]: I0109 11:58:51.860575 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:58:51 crc kubenswrapper[4727]: E0109 11:58:51.861578 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:58:58 crc kubenswrapper[4727]: I0109 11:58:58.532017 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w6pvx_879d1222-addb-406a-b8fd-3ce4068c1d08/control-plane-machine-set-operator/0.log" Jan 09 11:58:58 crc kubenswrapper[4727]: I0109 11:58:58.732195 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/kube-rbac-proxy/0.log" Jan 09 11:58:58 crc kubenswrapper[4727]: I0109 11:58:58.736433 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9b2sc_ff5b64d7-46ec-4f56-a044-4b57c96ebc03/machine-api-operator/0.log" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.315597 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:02 crc kubenswrapper[4727]: E0109 11:59:02.316758 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" containerName="container-00" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.316775 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" containerName="container-00" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.317022 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aa84c32-e586-44e1-bf65-2eca20015743" containerName="container-00" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.319275 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.331725 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.459919 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.460514 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.460592 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.562914 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.563038 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.563314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.563791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.564067 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.585715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"community-operators-cgctn\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:02 crc kubenswrapper[4727]: I0109 11:59:02.644608 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:03 crc kubenswrapper[4727]: I0109 11:59:03.259170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:03 crc kubenswrapper[4727]: I0109 11:59:03.860599 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 11:59:03 crc kubenswrapper[4727]: E0109 11:59:03.862663 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.427837 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" exitCode=0 Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.427976 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95"} Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.428321 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerStarted","Data":"11487aafdd4e5c000ef83e33c4fcf09588b392155e2980f91294cde1216b9bbc"} Jan 09 11:59:04 crc kubenswrapper[4727]: I0109 11:59:04.430933 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 
11:59:05 crc kubenswrapper[4727]: I0109 11:59:05.443275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerStarted","Data":"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b"} Jan 09 11:59:06 crc kubenswrapper[4727]: I0109 11:59:06.459094 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" exitCode=0 Jan 09 11:59:06 crc kubenswrapper[4727]: I0109 11:59:06.459175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b"} Jan 09 11:59:07 crc kubenswrapper[4727]: I0109 11:59:07.490559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerStarted","Data":"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad"} Jan 09 11:59:07 crc kubenswrapper[4727]: I0109 11:59:07.516869 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgctn" podStartSLOduration=3.067996176 podStartE2EDuration="5.516849011s" podCreationTimestamp="2026-01-09 11:59:02 +0000 UTC" firstStartedPulling="2026-01-09 11:59:04.430640518 +0000 UTC m=+4389.880545299" lastFinishedPulling="2026-01-09 11:59:06.879493353 +0000 UTC m=+4392.329398134" observedRunningTime="2026-01-09 11:59:07.51168168 +0000 UTC m=+4392.961586471" watchObservedRunningTime="2026-01-09 11:59:07.516849011 +0000 UTC m=+4392.966753792" Jan 09 11:59:12 crc kubenswrapper[4727]: I0109 11:59:12.645055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:12 crc kubenswrapper[4727]: I0109 11:59:12.646920 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:12 crc kubenswrapper[4727]: I0109 11:59:12.704769 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:13 crc kubenswrapper[4727]: I0109 11:59:13.611559 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:13 crc kubenswrapper[4727]: I0109 11:59:13.679092 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.011669 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2qqks_2715d39f-d488-448b-b6f2-ff592dea195a/cert-manager-controller/0.log" Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.217720 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cbsgr_3a45eda8-4151-4b6c-b0f2-ab6416dc34e9/cert-manager-cainjector/0.log" Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.281869 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qlfjg_5cee0bf6-27dd-4944-bbef-574afbae1542/cert-manager-webhook/0.log" Jan 09 11:59:15 crc kubenswrapper[4727]: I0109 11:59:15.577622 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgctn" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server" containerID="cri-o://01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" gracePeriod=2 Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.328297 4727 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.404098 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") pod \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.404673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") pod \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.404721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") pod \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\" (UID: \"b5aa136f-618b-42f3-b1ad-97199b0fb4f7\") " Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.405736 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities" (OuterVolumeSpecName: "utilities") pod "b5aa136f-618b-42f3-b1ad-97199b0fb4f7" (UID: "b5aa136f-618b-42f3-b1ad-97199b0fb4f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.411129 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8" (OuterVolumeSpecName: "kube-api-access-nkrd8") pod "b5aa136f-618b-42f3-b1ad-97199b0fb4f7" (UID: "b5aa136f-618b-42f3-b1ad-97199b0fb4f7"). InnerVolumeSpecName "kube-api-access-nkrd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.465208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5aa136f-618b-42f3-b1ad-97199b0fb4f7" (UID: "b5aa136f-618b-42f3-b1ad-97199b0fb4f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.506843 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrd8\" (UniqueName: \"kubernetes.io/projected/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-kube-api-access-nkrd8\") on node \"crc\" DevicePath \"\"" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.506889 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.506903 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5aa136f-618b-42f3-b1ad-97199b0fb4f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590470 4727 generic.go:334] "Generic (PLEG): container finished" podID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" exitCode=0 Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590540 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad"} Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590571 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-cgctn" event={"ID":"b5aa136f-618b-42f3-b1ad-97199b0fb4f7","Type":"ContainerDied","Data":"11487aafdd4e5c000ef83e33c4fcf09588b392155e2980f91294cde1216b9bbc"} Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590598 4727 scope.go:117] "RemoveContainer" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.590757 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgctn" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.632751 4727 scope.go:117] "RemoveContainer" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.647650 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.659669 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgctn"] Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.663719 4727 scope.go:117] "RemoveContainer" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.700405 4727 scope.go:117] "RemoveContainer" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" Jan 09 11:59:16 crc kubenswrapper[4727]: E0109 11:59:16.702942 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad\": container with ID starting with 01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad not found: ID does not exist" containerID="01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 
11:59:16.703010 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad"} err="failed to get container status \"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad\": rpc error: code = NotFound desc = could not find container \"01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad\": container with ID starting with 01322179945bd777c3c461d410bb0b7035d5829ae5eecb7d5c5dc127ee7802ad not found: ID does not exist" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.703047 4727 scope.go:117] "RemoveContainer" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" Jan 09 11:59:16 crc kubenswrapper[4727]: E0109 11:59:16.704502 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b\": container with ID starting with 7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b not found: ID does not exist" containerID="7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.704589 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b"} err="failed to get container status \"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b\": rpc error: code = NotFound desc = could not find container \"7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b\": container with ID starting with 7baec7a21a29085eb6fd3290f3f638a9f732db10202a7075a63b97ede2515a4b not found: ID does not exist" Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.704622 4727 scope.go:117] "RemoveContainer" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95" Jan 09 11:59:16 crc 
kubenswrapper[4727]: E0109 11:59:16.704953 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95\": container with ID starting with f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95 not found: ID does not exist" containerID="f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95"
Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.704968 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95"} err="failed to get container status \"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95\": rpc error: code = NotFound desc = could not find container \"f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95\": container with ID starting with f0439ce15efa2a56549e0d3e188d47d2f1c1c92a960ccc9733d3e598c78fae95 not found: ID does not exist"
Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.896623 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 11:59:16 crc kubenswrapper[4727]: E0109 11:59:16.896988 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:59:16 crc kubenswrapper[4727]: I0109 11:59:16.912854 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" path="/var/lib/kubelet/pods/b5aa136f-618b-42f3-b1ad-97199b0fb4f7/volumes"
Jan 09 11:59:28 crc kubenswrapper[4727]: I0109 11:59:28.861210 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 11:59:28 crc kubenswrapper[4727]: E0109 11:59:28.862137 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.173480 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-6dwzn_9721a7da-2c8a-4a0d-ac56-8b4b11c028cd/nmstate-console-plugin/0.log"
Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.387224 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4757d_673fefde-8c1b-46fe-a88a-00b3fa962a3e/nmstate-handler/0.log"
Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.485221 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/kube-rbac-proxy/0.log"
Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.527458 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-txtbd_0683f840-0540-443e-8f9d-123b701acbd7/nmstate-metrics/0.log"
Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.701279 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-p86wv_b4c7550e-1eaa-4e85-b44d-c752f6e37955/nmstate-operator/0.log"
Jan 09 11:59:30 crc kubenswrapper[4727]: I0109 11:59:30.760138 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-5lc88_7b8d8f1f-d4d5-4716-818f-6f5bbf6a2dac/nmstate-webhook/0.log"
Jan 09 11:59:39 crc kubenswrapper[4727]: I0109 11:59:39.860552 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 11:59:39 crc kubenswrapper[4727]: E0109 11:59:39.861649 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:59:53 crc kubenswrapper[4727]: I0109 11:59:53.860970 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 11:59:53 crc kubenswrapper[4727]: E0109 11:59:53.862323 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 11:59:56 crc kubenswrapper[4727]: I0109 11:59:56.257009 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-67d6487995-f424z" podUID="f6d5b74a-ef5f-4cb2-b043-e56bb3cbfcdb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.202580 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"]
Jan 09 12:00:00 crc kubenswrapper[4727]: E0109 12:00:00.203972 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.203986 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server"
Jan 09 12:00:00 crc kubenswrapper[4727]: E0109 12:00:00.203998 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-utilities"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.204004 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-utilities"
Jan 09 12:00:00 crc kubenswrapper[4727]: E0109 12:00:00.204032 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-content"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.204038 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="extract-content"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.204304 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5aa136f-618b-42f3-b1ad-97199b0fb4f7" containerName="registry-server"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.205080 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.208128 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.208417 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.213270 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"]
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.332641 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.333135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.333224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.435281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.435502 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.435574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.436624 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.454033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.458916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"collect-profiles-29466000-kl8x8\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:00 crc kubenswrapper[4727]: I0109 12:00:00.526116 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:01 crc kubenswrapper[4727]: I0109 12:00:01.026799 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"]
Jan 09 12:00:02 crc kubenswrapper[4727]: I0109 12:00:02.023655 4727 generic.go:334] "Generic (PLEG): container finished" podID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerID="7a8da01c55d77faff9fdb244545f165bdc12ccfae94df90194b9a9fbeed83e23" exitCode=0
Jan 09 12:00:02 crc kubenswrapper[4727]: I0109 12:00:02.023733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" event={"ID":"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa","Type":"ContainerDied","Data":"7a8da01c55d77faff9fdb244545f165bdc12ccfae94df90194b9a9fbeed83e23"}
Jan 09 12:00:02 crc kubenswrapper[4727]: I0109 12:00:02.024082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" event={"ID":"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa","Type":"ContainerStarted","Data":"2eb18636008350c6a890c3311d9b2fc9275f267bdb200d76bf2377928fd85240"}
Jan 09 12:00:03 crc kubenswrapper[4727]: I0109 12:00:03.428899 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/kube-rbac-proxy/0.log"
Jan 09 12:00:03 crc kubenswrapper[4727]: I0109 12:00:03.577474 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-ljds2_da86c323-c171-499f-8e25-74532f7c1fca/controller/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.036657 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.044161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8" event={"ID":"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa","Type":"ContainerDied","Data":"2eb18636008350c6a890c3311d9b2fc9275f267bdb200d76bf2377928fd85240"}
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.044216 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29466000-kl8x8"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.044214 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb18636008350c6a890c3311d9b2fc9275f267bdb200d76bf2377928fd85240"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.130052 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") pod \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") "
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.130160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") pod \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") "
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.130404 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") pod \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\" (UID: \"481d9d2d-4b03-4fb1-98a3-f861f7fd5caa\") "
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.131418 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume" (OuterVolumeSpecName: "config-volume") pod "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" (UID: "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.140336 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-6msbv_ca5ae287-2206-4f7d-8fdc-eeafd7fd01ee/frr-k8s-webhook-server/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.219563 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.233201 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-config-volume\") on node \"crc\" DevicePath \"\""
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.235126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" (UID: "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.235330 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc" (OuterVolumeSpecName: "kube-api-access-rj5tc") pod "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" (UID: "481d9d2d-4b03-4fb1-98a3-f861f7fd5caa"). InnerVolumeSpecName "kube-api-access-rj5tc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.335436 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.335483 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj5tc\" (UniqueName: \"kubernetes.io/projected/481d9d2d-4b03-4fb1-98a3-f861f7fd5caa-kube-api-access-rj5tc\") on node \"crc\" DevicePath \"\""
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.472840 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.472923 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.491445 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.508075 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.729679 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.734088 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.779081 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.798887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.867336 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:00:04 crc kubenswrapper[4727]: E0109 12:00:04.867807 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:00:04 crc kubenswrapper[4727]: I0109 12:00:04.975977 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-frr-files/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.013080 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-metrics/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.054195 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/cp-reloader/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.108827 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/controller/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.146951 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"]
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.170784 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465955-d2jgp"]
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.249750 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr-metrics/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.293711 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.339689 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/kube-rbac-proxy-frr/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.483256 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/reloader/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.592023 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7fc8994bc9-qg228_d7eb33c1-26fc-47be-8c5b-f235afa77ea8/manager/0.log"
Jan 09 12:00:05 crc kubenswrapper[4727]: I0109 12:00:05.863227 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c5db45976-lnrnz_d3f738e6-a0bc-42cd-b4d8-71940837e09f/webhook-server/0.log"
Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.047752 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/kube-rbac-proxy/0.log"
Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.639830 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ls2r2_8ffb75e8-9dff-48d1-952b-a07637adfceb/speaker/0.log"
Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.776047 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xvvzt_e9d515de-9700-4c41-97f0-317214f0a7bb/frr/0.log"
Jan 09 12:00:06 crc kubenswrapper[4727]: I0109 12:00:06.872691 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b68a71-edf6-4fe6-8f5c-92b1424309c6" path="/var/lib/kubelet/pods/12b68a71-edf6-4fe6-8f5c-92b1424309c6/volumes"
Jan 09 12:00:17 crc kubenswrapper[4727]: I0109 12:00:17.860796 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:00:17 crc kubenswrapper[4727]: E0109 12:00:17.863666 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:00:20 crc kubenswrapper[4727]: I0109 12:00:20.849039 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.208545 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.276703 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.313457 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.512217 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/util/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.532672 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/pull/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.539542 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4zrss4_af495843-7098-4ea5-9898-8a19dd9a0197/extract/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.736327 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.852384 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.907080 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log"
Jan 09 12:00:21 crc kubenswrapper[4727]: I0109 12:00:21.918882 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.134365 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/pull/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.142914 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/extract/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.154155 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8r5kc9_fb997fa3-0e55-46ca-b666-d4b710fe2bef/util/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.394274 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-utilities/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.591022 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-utilities/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.593788 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-content/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.630157 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-content/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.785106 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-utilities/0.log"
Jan 09 12:00:22 crc kubenswrapper[4727]: I0109 12:00:22.834820 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/extract-content/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.009482 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.017493 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4tm96_26aacbc8-deff-4e22-931d-552244f5bfcc/registry-server/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.185577 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.185606 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.243138 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.414738 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-content/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.433058 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/extract-utilities/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.618130 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-55prz_82b1f92b-6077-4b4c-876a-3d732a78b2cc/marketplace-operator/0.log"
Jan 09 12:00:23 crc kubenswrapper[4727]: I0109 12:00:23.773430 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.003567 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fbk2g_5045256f-167a-4bdd-b1dc-3b052bbdfeb6/registry-server/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.025334 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.081296 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.117584 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.778158 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-content/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.805922 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/extract-utilities/0.log"
Jan 09 12:00:24 crc kubenswrapper[4727]: I0109 12:00:24.937926 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vc94w_9334dd96-d38c-460b-a258-2bccfc2960d5/registry-server/0.log"
Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.032498 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log"
Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.399119 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log"
Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.463951 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log"
Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.496377 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log"
Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.655154 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-content/0.log"
Jan 09 12:00:25 crc kubenswrapper[4727]: I0109 12:00:25.671030 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/extract-utilities/0.log"
Jan 09 12:00:26 crc kubenswrapper[4727]: I0109 12:00:26.325881 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gdvvw_86044c1d-9cd9-49f7-b906-011e3856e591/registry-server/0.log"
Jan 09 12:00:28 crc kubenswrapper[4727]: I0109 12:00:28.860889 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:00:28 crc kubenswrapper[4727]: E0109 12:00:28.862129 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:00:41 crc kubenswrapper[4727]: I0109 12:00:41.166948 4727 scope.go:117] "RemoveContainer" containerID="84a8b1baf290e07735a8257dd39380cfb20abc093c31bd1ad4ffdd674f8e0709"
Jan 09 12:00:41 crc kubenswrapper[4727]: I0109 12:00:41.861222 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:00:41 crc kubenswrapper[4727]: E0109 12:00:41.861872 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:00:56 crc kubenswrapper[4727]: I0109 12:00:56.861229 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:00:56 crc kubenswrapper[4727]: E0109 12:00:56.862409 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.038485 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"]
Jan 09 12:00:59 crc kubenswrapper[4727]: E0109 12:00:59.039554 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerName="collect-profiles"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.039571 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerName="collect-profiles"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.039883 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="481d9d2d-4b03-4fb1-98a3-f861f7fd5caa" containerName="collect-profiles"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.041862 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.055952 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"]
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.124328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.124428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.124603 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.226459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh"
Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.226546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.226668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.227115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.227346 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.257664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"redhat-operators-4jlnh\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:00:59 crc kubenswrapper[4727]: I0109 12:00:59.388558 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.036761 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.167356 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29466001-jz589"] Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.169249 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.184278 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29466001-jz589"] Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254126 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254583 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 
12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.254669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.356816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.356948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.356975 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.357003 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 
12:01:00.639715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.640195 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.640967 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.641849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"keystone-cron-29466001-jz589\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:00 crc kubenswrapper[4727]: I0109 12:01:00.796225 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.409489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29466001-jz589"] Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.632094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerStarted","Data":"49eff92641b572fd3ae79f283a74f80a140d11bbb3959bc4fb63406948b417d3"} Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.635335 4727 generic.go:334] "Generic (PLEG): container finished" podID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb" exitCode=0 Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.635476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"} Jan 09 12:01:01 crc kubenswrapper[4727]: I0109 12:01:01.635818 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerStarted","Data":"c5f7699f3f27e94e26b48ccbdf25b86dac98761208083e17a39c25c60ddb3ed1"} Jan 09 12:01:02 crc kubenswrapper[4727]: E0109 12:01:02.242627 4727 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.200:36970->38.102.83.200:46169: read tcp 38.102.83.200:36970->38.102.83.200:46169: read: connection reset by peer Jan 09 12:01:02 crc kubenswrapper[4727]: I0109 12:01:02.648671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" 
event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerStarted","Data":"33c981394c1c7d5789f6284f030f834f28da49bd617b21080568612b55ba0cd0"} Jan 09 12:01:02 crc kubenswrapper[4727]: I0109 12:01:02.680273 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29466001-jz589" podStartSLOduration=2.680245249 podStartE2EDuration="2.680245249s" podCreationTimestamp="2026-01-09 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 12:01:02.670208988 +0000 UTC m=+4508.120113769" watchObservedRunningTime="2026-01-09 12:01:02.680245249 +0000 UTC m=+4508.130150030" Jan 09 12:01:04 crc kubenswrapper[4727]: I0109 12:01:04.672675 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerStarted","Data":"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"} Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.687115 4727 generic.go:334] "Generic (PLEG): container finished" podID="e3394060-4f97-480d-8271-7fb514f60bc0" containerID="33c981394c1c7d5789f6284f030f834f28da49bd617b21080568612b55ba0cd0" exitCode=0 Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.687216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerDied","Data":"33c981394c1c7d5789f6284f030f834f28da49bd617b21080568612b55ba0cd0"} Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.690103 4727 generic.go:334] "Generic (PLEG): container finished" podID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f" exitCode=0 Jan 09 12:01:05 crc kubenswrapper[4727]: I0109 12:01:05.690222 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"} Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.084883 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.147745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.147837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.148379 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.148470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") pod \"e3394060-4f97-480d-8271-7fb514f60bc0\" (UID: \"e3394060-4f97-480d-8271-7fb514f60bc0\") " Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.156114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89" (OuterVolumeSpecName: "kube-api-access-c4w89") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "kube-api-access-c4w89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.156701 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.181278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.231157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data" (OuterVolumeSpecName: "config-data") pod "e3394060-4f97-480d-8271-7fb514f60bc0" (UID: "e3394060-4f97-480d-8271-7fb514f60bc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251105 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251153 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251166 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4w89\" (UniqueName: \"kubernetes.io/projected/e3394060-4f97-480d-8271-7fb514f60bc0-kube-api-access-c4w89\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.251180 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3394060-4f97-480d-8271-7fb514f60bc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.713716 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29466001-jz589" event={"ID":"e3394060-4f97-480d-8271-7fb514f60bc0","Type":"ContainerDied","Data":"49eff92641b572fd3ae79f283a74f80a140d11bbb3959bc4fb63406948b417d3"} Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.713825 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49eff92641b572fd3ae79f283a74f80a140d11bbb3959bc4fb63406948b417d3" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.713873 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29466001-jz589" Jan 09 12:01:07 crc kubenswrapper[4727]: I0109 12:01:07.861680 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:07 crc kubenswrapper[4727]: E0109 12:01:07.862844 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:01:09 crc kubenswrapper[4727]: I0109 12:01:09.739120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerStarted","Data":"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"} Jan 09 12:01:09 crc kubenswrapper[4727]: I0109 12:01:09.764627 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4jlnh" podStartSLOduration=3.701531347 podStartE2EDuration="10.764599762s" podCreationTimestamp="2026-01-09 12:00:59 +0000 UTC" firstStartedPulling="2026-01-09 12:01:01.637951812 +0000 UTC m=+4507.087856593" lastFinishedPulling="2026-01-09 12:01:08.701020227 +0000 UTC m=+4514.150925008" observedRunningTime="2026-01-09 12:01:09.759085973 +0000 UTC m=+4515.208990764" watchObservedRunningTime="2026-01-09 12:01:09.764599762 +0000 UTC m=+4515.214504553" Jan 09 12:01:14 crc kubenswrapper[4727]: I0109 12:01:14.628423 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-xqcqv" podUID="414cbbdd-31b2-4eae-84a7-33cd1a4961b5" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.389170 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.391264 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.453197 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.895259 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:19 crc kubenswrapper[4727]: I0109 12:01:19.961060 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"] Jan 09 12:01:20 crc kubenswrapper[4727]: I0109 12:01:20.861401 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" Jan 09 12:01:20 crc kubenswrapper[4727]: E0109 12:01:20.861791 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" Jan 09 12:01:21 crc kubenswrapper[4727]: I0109 12:01:21.863112 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4jlnh" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server" containerID="cri-o://03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" gracePeriod=2 Jan 09 12:01:22 
crc kubenswrapper[4727]: I0109 12:01:22.430928 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.540129 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") pod \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.540320 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") pod \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.540410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") pod \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\" (UID: \"cfd08a1b-1ead-450f-b0e6-ea316b43a425\") " Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.541410 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities" (OuterVolumeSpecName: "utilities") pod "cfd08a1b-1ead-450f-b0e6-ea316b43a425" (UID: "cfd08a1b-1ead-450f-b0e6-ea316b43a425"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.546954 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl" (OuterVolumeSpecName: "kube-api-access-l2dsl") pod "cfd08a1b-1ead-450f-b0e6-ea316b43a425" (UID: "cfd08a1b-1ead-450f-b0e6-ea316b43a425"). InnerVolumeSpecName "kube-api-access-l2dsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.643385 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2dsl\" (UniqueName: \"kubernetes.io/projected/cfd08a1b-1ead-450f-b0e6-ea316b43a425-kube-api-access-l2dsl\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.643465 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.689690 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfd08a1b-1ead-450f-b0e6-ea316b43a425" (UID: "cfd08a1b-1ead-450f-b0e6-ea316b43a425"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.745958 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd08a1b-1ead-450f-b0e6-ea316b43a425-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876239 4727 generic.go:334] "Generic (PLEG): container finished" podID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" exitCode=0 Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"} Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876602 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jlnh" event={"ID":"cfd08a1b-1ead-450f-b0e6-ea316b43a425","Type":"ContainerDied","Data":"c5f7699f3f27e94e26b48ccbdf25b86dac98761208083e17a39c25c60ddb3ed1"} Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876626 4727 scope.go:117] "RemoveContainer" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5" Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.876796 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jlnh"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.906781 4727 scope.go:117] "RemoveContainer" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.937330 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"]
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.944676 4727 scope.go:117] "RemoveContainer" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.948967 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4jlnh"]
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.988693 4727 scope.go:117] "RemoveContainer" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"
Jan 09 12:01:22 crc kubenswrapper[4727]: E0109 12:01:22.989260 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5\": container with ID starting with 03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5 not found: ID does not exist" containerID="03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.989313 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5"} err="failed to get container status \"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5\": rpc error: code = NotFound desc = could not find container \"03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5\": container with ID starting with 03773eb80b5796cf9a5e44c277cfc310ad3e733ab2bda745c19489b2986ba7d5 not found: ID does not exist"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.989351 4727 scope.go:117] "RemoveContainer" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"
Jan 09 12:01:22 crc kubenswrapper[4727]: E0109 12:01:22.990112 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f\": container with ID starting with 850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f not found: ID does not exist" containerID="850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.990191 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f"} err="failed to get container status \"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f\": rpc error: code = NotFound desc = could not find container \"850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f\": container with ID starting with 850132a12fb1077ddd292083f161b7769786c79c8f89ed436930563ee29e5a8f not found: ID does not exist"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.990234 4727 scope.go:117] "RemoveContainer" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"
Jan 09 12:01:22 crc kubenswrapper[4727]: E0109 12:01:22.990777 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb\": container with ID starting with 269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb not found: ID does not exist" containerID="269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"
Jan 09 12:01:22 crc kubenswrapper[4727]: I0109 12:01:22.990820 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb"} err="failed to get container status \"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb\": rpc error: code = NotFound desc = could not find container \"269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb\": container with ID starting with 269b75f56c10a66c985be21d299ad664bda84ff6565a7b5d011ba78f5c1cf5eb not found: ID does not exist"
Jan 09 12:01:24 crc kubenswrapper[4727]: I0109 12:01:24.878693 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" path="/var/lib/kubelet/pods/cfd08a1b-1ead-450f-b0e6-ea316b43a425/volumes"
Jan 09 12:01:32 crc kubenswrapper[4727]: I0109 12:01:32.861749 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:01:32 crc kubenswrapper[4727]: E0109 12:01:32.862576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:01:44 crc kubenswrapper[4727]: I0109 12:01:44.861606 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:01:44 crc kubenswrapper[4727]: E0109 12:01:44.862733 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:01:57 crc kubenswrapper[4727]: I0109 12:01:57.860636 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:01:57 crc kubenswrapper[4727]: E0109 12:01:57.861892 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.507156 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"]
Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.508753 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-content"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509593 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-content"
Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.509622 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509628 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server"
Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.509653 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3394060-4f97-480d-8271-7fb514f60bc0" containerName="keystone-cron"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509659 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3394060-4f97-480d-8271-7fb514f60bc0" containerName="keystone-cron"
Jan 09 12:02:07 crc kubenswrapper[4727]: E0109 12:02:07.509692 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-utilities"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509698 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="extract-utilities"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509891 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd08a1b-1ead-450f-b0e6-ea316b43a425" containerName="registry-server"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.509901 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3394060-4f97-480d-8271-7fb514f60bc0" containerName="keystone-cron"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.511717 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.528499 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"]
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.638908 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.638982 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.639047 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.741655 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.742304 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.836984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"redhat-marketplace-vk2s7\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") " pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:07 crc kubenswrapper[4727]: I0109 12:02:07.843193 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:08 crc kubenswrapper[4727]: I0109 12:02:08.404501 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"]
Jan 09 12:02:09 crc kubenswrapper[4727]: I0109 12:02:09.361493 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9" exitCode=0
Jan 09 12:02:09 crc kubenswrapper[4727]: I0109 12:02:09.361765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"}
Jan 09 12:02:09 crc kubenswrapper[4727]: I0109 12:02:09.361947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerStarted","Data":"726bdc9b07fcbf4f96565bf46eb1c11bc1b653a21d6227964a5682ae96f79882"}
Jan 09 12:02:10 crc kubenswrapper[4727]: I0109 12:02:10.860876 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:02:10 crc kubenswrapper[4727]: E0109 12:02:10.861647 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:02:11 crc kubenswrapper[4727]: I0109 12:02:11.401498 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf" exitCode=0
Jan 09 12:02:11 crc kubenswrapper[4727]: I0109 12:02:11.401748 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"}
Jan 09 12:02:12 crc kubenswrapper[4727]: I0109 12:02:12.414659 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerStarted","Data":"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"}
Jan 09 12:02:12 crc kubenswrapper[4727]: I0109 12:02:12.441909 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vk2s7" podStartSLOduration=2.877529828 podStartE2EDuration="5.441883423s" podCreationTimestamp="2026-01-09 12:02:07 +0000 UTC" firstStartedPulling="2026-01-09 12:02:09.365135535 +0000 UTC m=+4574.815040316" lastFinishedPulling="2026-01-09 12:02:11.92948913 +0000 UTC m=+4577.379393911" observedRunningTime="2026-01-09 12:02:12.431889061 +0000 UTC m=+4577.881793862" watchObservedRunningTime="2026-01-09 12:02:12.441883423 +0000 UTC m=+4577.891788194"
Jan 09 12:02:17 crc kubenswrapper[4727]: I0109 12:02:17.843394 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:17 crc kubenswrapper[4727]: I0109 12:02:17.844250 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:17 crc kubenswrapper[4727]: I0109 12:02:17.891165 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:18 crc kubenswrapper[4727]: I0109 12:02:18.514410 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:18 crc kubenswrapper[4727]: I0109 12:02:18.600668 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"]
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.484659 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vk2s7" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server" containerID="cri-o://b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" gracePeriod=2
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.925261 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.935884 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") pod \"c6d41a8d-df27-42f8-8d05-c763c454fafd\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") "
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.935933 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") pod \"c6d41a8d-df27-42f8-8d05-c763c454fafd\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") "
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.935967 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") pod \"c6d41a8d-df27-42f8-8d05-c763c454fafd\" (UID: \"c6d41a8d-df27-42f8-8d05-c763c454fafd\") "
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.939079 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities" (OuterVolumeSpecName: "utilities") pod "c6d41a8d-df27-42f8-8d05-c763c454fafd" (UID: "c6d41a8d-df27-42f8-8d05-c763c454fafd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.969106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6d41a8d-df27-42f8-8d05-c763c454fafd" (UID: "c6d41a8d-df27-42f8-8d05-c763c454fafd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 12:02:20 crc kubenswrapper[4727]: I0109 12:02:20.981353 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq" (OuterVolumeSpecName: "kube-api-access-2nwwq") pod "c6d41a8d-df27-42f8-8d05-c763c454fafd" (UID: "c6d41a8d-df27-42f8-8d05-c763c454fafd"). InnerVolumeSpecName "kube-api-access-2nwwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.038418 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.038460 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d41a8d-df27-42f8-8d05-c763c454fafd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.038474 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwwq\" (UniqueName: \"kubernetes.io/projected/c6d41a8d-df27-42f8-8d05-c763c454fafd-kube-api-access-2nwwq\") on node \"crc\" DevicePath \"\""
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498228 4727 generic.go:334] "Generic (PLEG): container finished" podID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac" exitCode=0
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"}
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498325 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk2s7"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498346 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk2s7" event={"ID":"c6d41a8d-df27-42f8-8d05-c763c454fafd","Type":"ContainerDied","Data":"726bdc9b07fcbf4f96565bf46eb1c11bc1b653a21d6227964a5682ae96f79882"}
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.498374 4727 scope.go:117] "RemoveContainer" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.537038 4727 scope.go:117] "RemoveContainer" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.545948 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"]
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.558930 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk2s7"]
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.581144 4727 scope.go:117] "RemoveContainer" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.615226 4727 scope.go:117] "RemoveContainer" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"
Jan 09 12:02:21 crc kubenswrapper[4727]: E0109 12:02:21.615843 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac\": container with ID starting with b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac not found: ID does not exist" containerID="b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.615876 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac"} err="failed to get container status \"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac\": rpc error: code = NotFound desc = could not find container \"b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac\": container with ID starting with b1c10e0dfce394afec05b95d180b4e9965dfadbc6fd98596c1a7b6e9fb3b79ac not found: ID does not exist"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.615902 4727 scope.go:117] "RemoveContainer" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"
Jan 09 12:02:21 crc kubenswrapper[4727]: E0109 12:02:21.616123 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf\": container with ID starting with fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf not found: ID does not exist" containerID="fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.616154 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf"} err="failed to get container status \"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf\": rpc error: code = NotFound desc = could not find container \"fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf\": container with ID starting with fdfde30eb2809cc8a8e72ca00706f14cd8c6c6aa001677b6d9dbbdda8c3aaebf not found: ID does not exist"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.616179 4727 scope.go:117] "RemoveContainer" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"
Jan 09 12:02:21 crc kubenswrapper[4727]: E0109 12:02:21.616903 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9\": container with ID starting with 2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9 not found: ID does not exist" containerID="2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"
Jan 09 12:02:21 crc kubenswrapper[4727]: I0109 12:02:21.616927 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9"} err="failed to get container status \"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9\": rpc error: code = NotFound desc = could not find container \"2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9\": container with ID starting with 2e008d33eccea68e2fcc30dfa7a051e8150368aa43ce99a191888e3ccc8c9ee9 not found: ID does not exist"
Jan 09 12:02:22 crc kubenswrapper[4727]: I0109 12:02:22.874301 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" path="/var/lib/kubelet/pods/c6d41a8d-df27-42f8-8d05-c763c454fafd/volumes"
Jan 09 12:02:23 crc kubenswrapper[4727]: I0109 12:02:23.861378 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:02:23 crc kubenswrapper[4727]: E0109 12:02:23.861976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:02:28 crc kubenswrapper[4727]: I0109 12:02:28.570500 4727 generic.go:334] "Generic (PLEG): container finished" podID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerID="97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4" exitCode=0
Jan 09 12:02:28 crc kubenswrapper[4727]: I0109 12:02:28.570561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" event={"ID":"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa","Type":"ContainerDied","Data":"97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4"}
Jan 09 12:02:28 crc kubenswrapper[4727]: I0109 12:02:28.571853 4727 scope.go:117] "RemoveContainer" containerID="97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4"
Jan 09 12:02:29 crc kubenswrapper[4727]: I0109 12:02:29.470175 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2dx8_must-gather-pnnsk_6406f2a3-a4e6-4379-a2a6-adcc1eb952fa/gather/0.log"
Jan 09 12:02:35 crc kubenswrapper[4727]: I0109 12:02:35.861632 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:02:35 crc kubenswrapper[4727]: E0109 12:02:35.862802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:02:40 crc kubenswrapper[4727]: I0109 12:02:40.691627 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"]
Jan 09 12:02:40 crc kubenswrapper[4727]: I0109 12:02:40.692718 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z2dx8/must-gather-pnnsk" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy" containerID="cri-o://ba3fec2faa6d34d88b2c0ab138a91ee7a89e044844462fc1ed9ddd8ff5e29edf" gracePeriod=2
Jan 09 12:02:40 crc kubenswrapper[4727]: I0109 12:02:40.704026 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2dx8/must-gather-pnnsk"]
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.223460 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2dx8_must-gather-pnnsk_6406f2a3-a4e6-4379-a2a6-adcc1eb952fa/copy/0.log"
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.224551 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk"
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.287010 4727 scope.go:117] "RemoveContainer" containerID="8eaa00e81b8c71507cd8bd7cbb7af780404b4571231b253f7cd04b4dbaf83431"
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.326843 4727 scope.go:117] "RemoveContainer" containerID="ba3fec2faa6d34d88b2c0ab138a91ee7a89e044844462fc1ed9ddd8ff5e29edf"
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.362171 4727 scope.go:117] "RemoveContainer" containerID="97f8aa93d554794fd7bfe9bfbe80043d24392feadfcb8ad66055cd8b3a2b7ed4"
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.384797 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") pod \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") "
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.384912 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") pod \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\" (UID: \"6406f2a3-a4e6-4379-a2a6-adcc1eb952fa\") "
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.396818 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv" (OuterVolumeSpecName: "kube-api-access-hzlhv") pod "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" (UID: "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa"). InnerVolumeSpecName "kube-api-access-hzlhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.487891 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzlhv\" (UniqueName: \"kubernetes.io/projected/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-kube-api-access-hzlhv\") on node \"crc\" DevicePath \"\""
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.574046 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" (UID: "6406f2a3-a4e6-4379-a2a6-adcc1eb952fa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.589811 4727 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 09 12:02:41 crc kubenswrapper[4727]: I0109 12:02:41.699593 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2dx8/must-gather-pnnsk"
Jan 09 12:02:42 crc kubenswrapper[4727]: I0109 12:02:42.881458 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" path="/var/lib/kubelet/pods/6406f2a3-a4e6-4379-a2a6-adcc1eb952fa/volumes"
Jan 09 12:02:48 crc kubenswrapper[4727]: I0109 12:02:48.860591 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:02:48 crc kubenswrapper[4727]: E0109 12:02:48.861692 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:03:03 crc kubenswrapper[4727]: I0109 12:03:03.860733 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:03:03 crc kubenswrapper[4727]: E0109 12:03:03.861922 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:03:14 crc kubenswrapper[4727]: I0109 12:03:14.867388 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:03:14 crc kubenswrapper[4727]: E0109 12:03:14.868694 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:03:29 crc kubenswrapper[4727]: I0109 12:03:29.861456 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:03:29 crc kubenswrapper[4727]: E0109 12:03:29.862644 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hzdp7_openshift-machine-config-operator(ea573637-1ca1-4211-8c88-9bc9fa78d6c4)\"" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4"
Jan 09 12:03:43 crc kubenswrapper[4727]: I0109 12:03:43.861267 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814"
Jan 09 12:03:44 crc kubenswrapper[4727]: I0109 12:03:44.317482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"f17b544e60259a44fbe58f713bbb533f08e919f7e326182faa062d2e8e4fead0"}
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.123305 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"]
Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126300 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-utilities"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126330 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-utilities"
Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126351 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126364 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server"
Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126388 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy"
Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126402 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-content"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126410 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="extract-content"
Jan 09 12:04:15 crc kubenswrapper[4727]: E0109 12:04:15.126425 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="gather"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126433 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="gather"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126675 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d41a8d-df27-42f8-8d05-c763c454fafd" containerName="registry-server"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126706 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="copy"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.126722 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6406f2a3-a4e6-4379-a2a6-adcc1eb952fa" containerName="gather"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.128587 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.142917 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"]
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.164199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.164314 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.164463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p"
Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.266626 4727 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.266700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.266774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.267430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.267545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.291699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"certified-operators-zkn2p\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:15 crc kubenswrapper[4727]: I0109 12:04:15.452339 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.017732 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:16 crc kubenswrapper[4727]: W0109 12:04:16.025089 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303bd47d_8182_4a89_bd15_9ca2b7d6101d.slice/crio-26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500 WatchSource:0}: Error finding container 26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500: Status 404 returned error can't find the container with id 26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500 Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.667012 4727 generic.go:334] "Generic (PLEG): container finished" podID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" exitCode=0 Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.667476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91"} Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.667616 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" 
event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerStarted","Data":"26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500"} Jan 09 12:04:16 crc kubenswrapper[4727]: I0109 12:04:16.670723 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 12:04:18 crc kubenswrapper[4727]: I0109 12:04:18.687964 4727 generic.go:334] "Generic (PLEG): container finished" podID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" exitCode=0 Jan 09 12:04:18 crc kubenswrapper[4727]: I0109 12:04:18.688059 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12"} Jan 09 12:04:19 crc kubenswrapper[4727]: I0109 12:04:19.703159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerStarted","Data":"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3"} Jan 09 12:04:19 crc kubenswrapper[4727]: I0109 12:04:19.739077 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zkn2p" podStartSLOduration=2.102521563 podStartE2EDuration="4.739054385s" podCreationTimestamp="2026-01-09 12:04:15 +0000 UTC" firstStartedPulling="2026-01-09 12:04:16.670227975 +0000 UTC m=+4702.120132756" lastFinishedPulling="2026-01-09 12:04:19.306760797 +0000 UTC m=+4704.756665578" observedRunningTime="2026-01-09 12:04:19.727418899 +0000 UTC m=+4705.177323680" watchObservedRunningTime="2026-01-09 12:04:19.739054385 +0000 UTC m=+4705.188959166" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.452539 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.453306 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.513837 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.815562 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:25 crc kubenswrapper[4727]: I0109 12:04:25.882091 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:27 crc kubenswrapper[4727]: I0109 12:04:27.790130 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zkn2p" podUID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerName="registry-server" containerID="cri-o://6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" gracePeriod=2 Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.220232 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.293033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") pod \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.293495 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") pod \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.293893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") pod \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\" (UID: \"303bd47d-8182-4a89-bd15-9ca2b7d6101d\") " Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.298447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities" (OuterVolumeSpecName: "utilities") pod "303bd47d-8182-4a89-bd15-9ca2b7d6101d" (UID: "303bd47d-8182-4a89-bd15-9ca2b7d6101d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.396174 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.467204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "303bd47d-8182-4a89-bd15-9ca2b7d6101d" (UID: "303bd47d-8182-4a89-bd15-9ca2b7d6101d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.498369 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303bd47d-8182-4a89-bd15-9ca2b7d6101d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803577 4727 generic.go:334] "Generic (PLEG): container finished" podID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" exitCode=0 Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3"} Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803674 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkn2p" event={"ID":"303bd47d-8182-4a89-bd15-9ca2b7d6101d","Type":"ContainerDied","Data":"26c558521b5be16c7c5518a74955bd7ddc5dae147ae5e0de3de23dc608f65500"} Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803711 4727 
scope.go:117] "RemoveContainer" containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.803711 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkn2p" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.830655 4727 scope.go:117] "RemoveContainer" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.835913 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw" (OuterVolumeSpecName: "kube-api-access-6k8dw") pod "303bd47d-8182-4a89-bd15-9ca2b7d6101d" (UID: "303bd47d-8182-4a89-bd15-9ca2b7d6101d"). InnerVolumeSpecName "kube-api-access-6k8dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 12:04:28 crc kubenswrapper[4727]: I0109 12:04:28.905606 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8dw\" (UniqueName: \"kubernetes.io/projected/303bd47d-8182-4a89-bd15-9ca2b7d6101d-kube-api-access-6k8dw\") on node \"crc\" DevicePath \"\"" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.090739 4727 scope.go:117] "RemoveContainer" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.152546 4727 scope.go:117] "RemoveContainer" containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" Jan 09 12:04:29 crc kubenswrapper[4727]: E0109 12:04:29.154683 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3\": container with ID starting with 6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3 not found: ID does not exist" 
containerID="6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.154738 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3"} err="failed to get container status \"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3\": rpc error: code = NotFound desc = could not find container \"6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3\": container with ID starting with 6e186153aef0cc7bd137f07d5d7534f061da6205bfaf484e7687c1f5c8363cd3 not found: ID does not exist" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.154771 4727 scope.go:117] "RemoveContainer" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" Jan 09 12:04:29 crc kubenswrapper[4727]: E0109 12:04:29.155377 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12\": container with ID starting with 324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12 not found: ID does not exist" containerID="324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.155413 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12"} err="failed to get container status \"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12\": rpc error: code = NotFound desc = could not find container \"324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12\": container with ID starting with 324bfc76a6501fd378f8fd880f3152b47505ee9f138ca91d1941e3a7b6dcbb12 not found: ID does not exist" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.155437 4727 scope.go:117] 
"RemoveContainer" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" Jan 09 12:04:29 crc kubenswrapper[4727]: E0109 12:04:29.155951 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91\": container with ID starting with ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91 not found: ID does not exist" containerID="ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.155983 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91"} err="failed to get container status \"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91\": rpc error: code = NotFound desc = could not find container \"ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91\": container with ID starting with ab5a424ff60e3a899b4ff57fe024dbdaf0b058b61a1a7cc81c9385c7e57f2e91 not found: ID does not exist" Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.206434 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:29 crc kubenswrapper[4727]: I0109 12:04:29.216584 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zkn2p"] Jan 09 12:04:30 crc kubenswrapper[4727]: I0109 12:04:30.873126 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303bd47d-8182-4a89-bd15-9ca2b7d6101d" path="/var/lib/kubelet/pods/303bd47d-8182-4a89-bd15-9ca2b7d6101d/volumes" Jan 09 12:04:59 crc kubenswrapper[4727]: I0109 12:04:59.084897 4727 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod303bd47d-8182-4a89-bd15-9ca2b7d6101d"] err="unable to destroy cgroup 
paths for cgroup [kubepods burstable pod303bd47d-8182-4a89-bd15-9ca2b7d6101d] : Timed out while waiting for systemd to remove kubepods-burstable-pod303bd47d_8182_4a89_bd15_9ca2b7d6101d.slice" Jan 09 12:06:09 crc kubenswrapper[4727]: I0109 12:06:09.405145 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 12:06:09 crc kubenswrapper[4727]: I0109 12:06:09.406198 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 12:06:39 crc kubenswrapper[4727]: I0109 12:06:39.404695 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 12:06:39 crc kubenswrapper[4727]: I0109 12:06:39.405385 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 12:07:09 crc kubenswrapper[4727]: I0109 12:07:09.406177 4727 patch_prober.go:28] interesting pod/machine-config-daemon-hzdp7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 09 12:07:09 crc kubenswrapper[4727]: I0109 12:07:09.406906 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 12:07:09 crc kubenswrapper[4727]: I0109 12:07:09.407016 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" Jan 09 12:07:09 crc kubenswrapper[4727]: I0109 12:07:09.408563 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f17b544e60259a44fbe58f713bbb533f08e919f7e326182faa062d2e8e4fead0"} pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 12:07:09 crc kubenswrapper[4727]: I0109 12:07:09.408697 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" podUID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerName="machine-config-daemon" containerID="cri-o://f17b544e60259a44fbe58f713bbb533f08e919f7e326182faa062d2e8e4fead0" gracePeriod=600 Jan 09 12:07:10 crc kubenswrapper[4727]: I0109 12:07:10.429777 4727 generic.go:334] "Generic (PLEG): container finished" podID="ea573637-1ca1-4211-8c88-9bc9fa78d6c4" containerID="f17b544e60259a44fbe58f713bbb533f08e919f7e326182faa062d2e8e4fead0" exitCode=0 Jan 09 12:07:10 crc kubenswrapper[4727]: I0109 12:07:10.430234 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" 
event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerDied","Data":"f17b544e60259a44fbe58f713bbb533f08e919f7e326182faa062d2e8e4fead0"} Jan 09 12:07:10 crc kubenswrapper[4727]: I0109 12:07:10.430285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hzdp7" event={"ID":"ea573637-1ca1-4211-8c88-9bc9fa78d6c4","Type":"ContainerStarted","Data":"362c1c65626caf30fa3897a3355c732c0f759df8a348b70da01776b5e74d8251"} Jan 09 12:07:10 crc kubenswrapper[4727]: I0109 12:07:10.430313 4727 scope.go:117] "RemoveContainer" containerID="968b25b654221c4c527c97b70636d3edca26d8dfba56dc7cc8b9d4d63c112814" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515130167600024444 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015130167601017362 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015130155645016512 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015130155645015462 5ustar corecore